We use MDCS with SLURM on a local HPC cluster, and in principle the integration of MDCS with SLURM has worked following the instructions found here. We had to make a fix in the file communicatingJobWrapper.sh as described here. However, communicating jobs now sometimes do not finish correctly, and I was able to track the problem down to the change above. Basically, the wrapper script hangs when trying to stop the SMPD:
$ tail Job32.log
[1]2017-03-09 10:56:19 | About to exit with code: 0
[3]2017-03-09 10:56:19 | dctEvaluateFunctionArray calling: iExitFunction with args
[0]2017-03-09 10:56:19 | dctEvaluateFunctionArray calling: iExitFunction with args
[3]2017-03-09 10:56:19 | About to exit MATLAB normally
[0]2017-03-09 10:56:19 | About to exit MATLAB normally
[3]2017-03-09 10:56:19 | About to exit with code: 0
[0]2017-03-09 10:56:19 | About to exit with code: 0
Stopping SMPD ...
srun --ntasks-per-node=1 --ntasks=3 /cm/shared/uniol/software/MATLAB/2016b/bin/mw_smpd -shutdown -phrase MATLAB -port 27223
srun: Job step creation temporarily disabled, retrying
This happens whenever only a single CPU/core is allocated on a node (we use select/cons_res with CR_CPU_MEMORY). In that case the srun running in the background prevents the srun for the SMPD shutdown from allocating resources.
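For context, the relevant part of the wrapper script looks roughly like this (a simplified sketch, not the verbatim script; SMPD_PORT stands in for however the port is actually chosen):

# sketch: SMPD is started as a background job step, one task per node
srun --ntasks-per-node=1 --ntasks=${SLURM_NNODES} /cm/shared/uniol/software/MATLAB/2016b/bin/mw_smpd -phrase MATLAB -port ${SMPD_PORT} &
SMPD_PID=$!
# ... mpiexec then runs the MATLAB workers against the SMPD ring ...
# sketch: shutdown is a second job step; with only one CPU allocated per node,
# that CPU is still held by the background step, so this srun keeps retrying
echo "Stopping SMPD ..."
srun --ntasks-per-node=1 --ntasks=${SLURM_NNODES} /cm/shared/uniol/software/MATLAB/2016b/bin/mw_smpd -shutdown -phrase MATLAB -port ${SMPD_PORT}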
I can think of only one way to resolve this problem: using OverSubscribe (which we currently have turned off). Is there another way? The job wrapper script we use is attached.
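For reference, by "using OverSubscribe" I mean a partition definition in slurm.conf roughly along these lines (partition and node names are placeholders, and I am not certain this is the right knob for letting job steps of the same job share CPUs):

# hypothetical slurm.conf change we would rather avoid
PartitionName=compute Nodes=node[001-064] Default=YES State=UP OverSubscribe=YES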