As an aside, it's not very effective to benchmark a single objective evaluation and then set a "time limit" via a fixed number of evaluations, for a couple of reasons:
- The method involves simulation, so function evaluation time varies from call to call
- This is being run on a large cluster, and due to some idiosyncrasies in how jobs are scheduled, the number of cores requested does not always precisely match the number of workers in the parpool. This is a problem on our end, but it makes function evaluation time unstable in a second way.
- The cluster is shared, so it's very important that we request exactly what we need. Over-requesting time leads to inefficient use of the cluster, while under-requesting causes the scheduler to kill our process before completion, which would be very bad.
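Given that wall-clock time is the real constraint, one alternative is to limit the run on elapsed time rather than evaluation count. A minimal sketch, assuming an Optimization Toolbox solver that accepts an `OutputFcn` with the `stop = fcn(x, optimValues, state)` signature (e.g. `fmincon`); `timeBudget` and `startTime` are illustrative names, not from the actual setup:

```matlab
% Sketch: stop on wall-clock time instead of evaluation count.
startTime  = tic;
timeBudget = 0.9 * 6 * 3600;  % e.g. 90% of a 6-hour allocation (assumed)

% Return stop = true once the budget is exceeded; the solver then
% exits cleanly and returns the best point found so far.
stopOnTime = @(x, optimValues, state) toc(startTime) > timeBudget;

options = optimoptions('fmincon', 'OutputFcn', stopOnTime);
```

Some solvers (e.g. `ga`, `surrogateopt`) expose a `MaxTime` option directly, which achieves the same thing without a custom output function.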
One possibility would be to build in some way to output the best solution found so far when the scheduler kills the process, but I'm not clear how to do that, and it's probably not desirable behaviour in any case.
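Rather than trying to catch the kill signal (which MATLAB does not readily expose), a middle ground is to checkpoint the incumbent solution to disk periodically, so a usable result survives even if the job is killed mid-run. A sketch using the same `fmincon`-style `OutputFcn` mechanism; the function and file names are illustrative:

```matlab
function stop = checkpointFcn(x, optimValues, state)
% Sketch: periodically save the best point so far, so a usable
% result survives if the scheduler kills the job. Assumes the
% fmincon-style OutputFcn signature.
    stop = false;
    if strcmp(state, 'iter')
        bestX    = x;                  % incumbent point
        bestFval = optimValues.fval;   % incumbent objective value
        save('checkpoint.mat', 'bestX', 'bestFval');
    end
end
```

Saving every iteration is cheap relative to a simulation-based objective; if not, the save can be gated on elapsed time since the last checkpoint.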