Communicating MDCS jobs with SLURM do not finish correctly

We use MDCS with SLURM on a local HPC cluster, and in principle the integration of MDCS with SLURM has worked following the instructions found here. We had to make a fix in the file communicatingJobWrapper.sh as described here.
However, communicating jobs now sometimes do not finish correctly, and I was able to track the problem down to the change above. Basically, the wrapper script hangs when trying to stop the SMPD:
$ tail Job32.log
[1]2017-03-09 10:56:19 | About to exit with code: 0
[3]2017-03-09 10:56:19 | dctEvaluateFunctionArray calling: iExitFunction with args
[0]2017-03-09 10:56:19 | dctEvaluateFunctionArray calling: iExitFunction with args
[3]2017-03-09 10:56:19 | About to exit MATLAB normally
[0]2017-03-09 10:56:19 | About to exit MATLAB normally
[3]2017-03-09 10:56:19 | About to exit with code: 0
[0]2017-03-09 10:56:19 | About to exit with code: 0
Stopping SMPD ...
srun --ntasks-per-node=1 --ntasks=3 /cm/shared/uniol/software/MATLAB/2016b/bin/mw_smpd -shutdown -phrase MATLAB -port 27223
srun: Job step creation temporarily disabled, retrying
This happens whenever a node has only a single CPU/core allocated (we use select/cons_res with CR_CPU_MEMORY). In that case, the srun running in the background prevents the srun for the SMPD shutdown from allocating resources.
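For illustration, here is a minimal sketch of the relevant part of communicatingJobWrapper.sh (simplified; FULL_SMPD and SMPD_PORT come from the actual commands, but the surrounding structure is an assumption based on the standard MathWorks wrapper):

# Launch one SMPD daemon per node; this srun keeps running in the
# background and therefore keeps holding the allocated CPUs.
srun --ntasks-per-node=1 --ntasks=${SLURM_JOB_NUM_NODES} ${FULL_SMPD} -phrase MATLAB -port ${SMPD_PORT} &

# ... mw_mpiexec launches the MATLAB workers here ...

# Shutdown: with only one CPU allocated per node, the background srun
# above still occupies it, so this job step cannot obtain resources and
# Slurm keeps printing "Job step creation temporarily disabled, retrying".
srun --ntasks-per-node=1 --ntasks=${SLURM_JOB_NUM_NODES} ${FULL_SMPD} -shutdown -phrase MATLAB -port ${SMPD_PORT}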
I can think of only one way to resolve this problem, namely using OverSubscribe (which we currently have turned off). Is there another way? The JobWrapper script we use is attached.
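For reference, enabling OverSubscribe would mean a slurm.conf change along these lines (partition name and node list are placeholders, not our actual configuration):

# Allow up to two jobs to share each allocated resource on this partition.
PartitionName=mdcs Nodes=node[001-032] OverSubscribe=FORCE:2 State=UP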

Accepted Answer

Stefan Harfst, 2017-3-10
Found a solution:
Add the options --overcommit and --gres=none (the latter in case the use of GRESes is configured in communicatingSubmitFcn.m) to the two srun commands in the communicatingJobWrapper.sh script, e.g. for the shutdown:
srun --overcommit --gres=none --ntasks-per-node=1 --ntasks=${SLURM_JOB_NUM_NODES} ${FULL_SMPD} -shutdown -phrase MATLAB -port ${SMPD_PORT}
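The launch command gets the same treatment (a sketch; the exact form of the launch line may differ between wrapper versions):

srun --overcommit --gres=none --ntasks-per-node=1 --ntasks=${SLURM_JOB_NUM_NODES} ${FULL_SMPD} -phrase MATLAB -port ${SMPD_PORT} &

With --overcommit, the shutdown job step is allowed to start even though the allocated CPUs are still held by the background srun, which breaks the deadlock without enabling OverSubscribe cluster-wide.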
2 Comments
Brian, 2022-8-25
This thread is 5 years old, but I am experiencing the same issue since my organization's new HPC uses Slurm (vs. SGE). I am running 2017b and unable to validate my cluster profile, and the above edits to the srun commands are not resolving this behavior.
MATLAB is not receiving a 'finished' signal even though the job goes to CG and then falls off the queue.
Thanks for any further assistance.
Stefan Harfst, 2022-9-9
If the jobs are completing on the cluster but MATLAB is not receiving the finished state, then I think you are facing a different problem. The problem we had was that some MATLAB jobs never terminated because the srun command to shut down the SMPD server got stuck.
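One way to tell the two failure modes apart (a suggestion; <jobid> is a placeholder for the actual Slurm job ID) is to check the accounting record after the job has left the queue:

# If State shows COMPLETED, the cluster side finished cleanly and the
# problem is MATLAB not picking up the finished state.
sacct -j <jobid> --format=JobID,JobName,State,ExitCode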
