Offload Polyspace Analysis from Continuous Integration Server to Another Server

When running static code analysis with Polyspace® as part of continuous integration, you might want the analysis to run on a server that is different from the server running your continuous integration (CI) scripts. For instance:

  • You might want to perform the analysis on a server that has more processing power. You can offload the analysis from your CI server to the other server.

  • You might want to submit analysis jobs from several CI servers to a dedicated analysis server, hold the jobs in queue, and execute them as Polyspace Server instances become available.

When you offload an analysis, the compilation phase of the analysis runs on the CI server. After compilation, the analysis job is submitted to the other server and continues on this server. On completion, the analysis results are downloaded back to the CI server. You can then upload the results to Polyspace Access for review, or report the results in some other format.

Offloading workflow: CI Server uploads to another server. Results are downloaded after analysis. CI Server then uploads to Polyspace Access.

Install Products

A typical distributed network for offloading an analysis consists of these parts:

  • Client node(s): Each CI server acts as a client node that submits Polyspace analysis jobs to a cluster.

    The cluster consists of a head node and one or more worker nodes. In this example, we use the same computer as the head node and one worker node.

  • Head node: The head node distributes the submitted jobs to worker nodes.

  • Worker node(s): Each worker node executes one Polyspace analysis at a time.

Note

The versions of Polyspace on the client and worker nodes must match.

Flow diagram showing the configuration of products required for offloading analysis jobs from one server to another. The CI server must contain an installation of Polyspace Bug Finder Server. The servers running the analysis must contain MATLAB Parallel Server and one or both of Polyspace Bug Finder Server and Polyspace Code Prover Server.

Install these products:

  • Client nodes: Polyspace Bug Finder™ Server or Polyspace Code Prover™ Server to submit jobs from the CI server. Note that you do not require licenses for the Polyspace Server products if you use them only for job submission (with the -batch option).

  • Head node: MATLAB® Parallel Server™ to manage submissions from multiple clients. An analysis job is created for each submission and placed in a queue. As soon as a worker node is available, the next analysis job from the queue is run on the worker.

  • Worker node(s): MATLAB Parallel Server and Polyspace Bug Finder Server or Polyspace Code Prover Server on the worker nodes to run a Bug Finder or Code Prover analysis.

In the simplest configuration, where the same computer serves as both the head node and one worker node, you install MATLAB Parallel Server and one or both of Polyspace Bug Finder Server and Polyspace Code Prover Server on this computer. This example describes that simple configuration, but you can generalize the steps to multiple workers on separate computers.

Configure and Start Job Scheduler Services on Head Node and Worker Node

Start a job scheduler service (the MATLAB Job Scheduler or mjs service) on the computer that acts as the head node and worker node. Before starting the service, you must perform an initial setup.

Specify Polyspace Installation Paths

MATLAB Parallel Server and Polyspace Server products are installed in two separate folders. The MATLAB Parallel Server installation routes the Polyspace analysis to the Polyspace Server products. To link the two installations, specify the path to the root folder of the Polyspace Server products in your MATLAB Parallel Server installation.

  1. Navigate to matlabroot\toolbox\parallel\bin\. Here, matlabroot is the MATLAB Parallel Server installation folder, for instance, C:\Program Files\MATLAB\R2024b.

  2. Uncomment and modify the following line in the file mjs_polyspace.conf. To edit and save the file, open your editor in administrator mode.

    POLYSPACE_SERVER_ROOT=polyspaceserverroot

    Here, polyspaceserverroot is the installation path of the server products, for instance:

    C:\Program Files\Polyspace Server\R2024b

The Polyspace Server product offloading the analysis must belong to the same release as the Polyspace Server product running the analysis. If you offload an analysis from an R2024b Polyspace Server product, the analysis must run using another R2024b Polyspace Server product.
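After the edit, the relevant line in mjs_polyspace.conf might look like this. The path shown is an example; substitute the root folder of your own Polyspace Server installation:

```
# mjs_polyspace.conf -- links MATLAB Parallel Server to the Polyspace
# Server installation (example Windows path; use your own install folder)
POLYSPACE_SERVER_ROOT=C:\Program Files\Polyspace Server\R2024b
```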

Configure mjs Service Settings

Before starting MATLAB Parallel Server (the mjs service), you must perform a minimum configuration.

  1. Navigate to matlabroot\toolbox\parallel\bin, where matlabroot is the MATLAB Parallel Server installation folder, for instance, C:\Program Files\MATLAB\R2024b.

  2. Modify the file mjs_def.bat (Windows®) or mjs_def.sh (Linux®). To edit and save the file, open your editor in administrator mode.

    Read the instructions in the file and uncomment the lines as needed. At a minimum, uncomment these lines that specify:

    • Host name.

      Windows:

      REM set HOSTNAME=%strHostname%.%strDomain%

      Linux:

      #HOSTNAME=`hostname -f`

      Uncomment the appropriate line and explicitly specify your computer host name.

    • Security level.

      Windows:

      REM set SECURITY_LEVEL=

      Linux:

      #SECURITY_LEVEL=""

      Explicitly specify a security level to avoid future errors when starting the job scheduler.

      For security levels 2 and higher, you have to provide a password in a graphical window at the time of job submission.
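As a concrete sketch, the minimally edited settings in mjs_def.sh on a Linux machine might read as follows. The host name and security level here are example values, not requirements:

```shell
# Example edits to mjs_def.sh -- values shown are placeholders
HOSTNAME=ciworker.example.com   # explicit fully qualified host name of this machine
SECURITY_LEVEL="1"              # see the MATLAB Parallel Server security levels
```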

Start mjs Service and One Worker

In a command-line terminal, cd to matlabroot\toolbox\parallel\bin, where matlabroot is the MATLAB Parallel Server installation folder, for instance, C:\Program Files\MATLAB\R2024b. Run these commands (directly at the command line or by using scripts):

mjs install
mjs start
startjobmanager -name JobScheduler -remotehost hostname -v
startworker -jobmanagerhost hostname -jobmanager JobScheduler -remotehost hostname -v
Here, hostname is the host name of your computer. This name is the host name that you specified in the file mjs_def.bat (Windows) or mjs_def.sh (Linux).

For more details, including how to configure services with multiple workers, see the MATLAB Parallel Server documentation.

Offload Analysis from Client Node

Once you have set up the computer that acts as the head node and worker node, you are ready to offload a Polyspace analysis from the client node (the CI server running scripts on Jenkins® or another CI system).

To offload an analysis, enter:

polyspaceserverroot\polyspace\bin\polyspace-bug-finder-server -batch -scheduler hostname|MJSName@hostname [options] [-mjs-username name]

where:

  • polyspaceserverroot is the installation folder of Polyspace Server products on the client node, for instance, C:\Program Files\Polyspace Server\R2024b.

  • hostname is the host name of the computer that hosts the head node of the MATLAB Parallel Server cluster.

    MJSName is the name of the MATLAB Job Scheduler on the head node host.

    If you use the startjobmanager command to start the MATLAB Job Scheduler, MJSName is the argument of the option -name.

  • options are the Polyspace analysis options. These options are the same as for a local analysis. For instance, you can use these options:

    • -sources-list-file: Specify a text file that has one source file name per line.

    • -options-file: Specify a text file that has one option per line.

    • -results-dir: Specify a download folder for storing results after analysis.

    For the full list of options, see Complete List of Polyspace Bug Finder Analysis Engine Options.

  • name is the user name required for job submissions using MATLAB Parallel Server. This credential is required only if you use a security level of 1 or higher for MATLAB Parallel Server submissions. See Set MATLAB Job Scheduler Cluster Security (MATLAB Parallel Server).

For security levels 2 and higher, you have to provide a password in a graphical window at the time of job submission. To avoid this prompt in the future, you can specify that the password be remembered on the computer.
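Putting the pieces together, a submission from a Linux CI server might look like this sketch. The installation path, host name, scheduler name, source file names, and option values are all illustrative; adjust them to your environment:

```shell
# Input files consumed by -sources-list-file and -options-file
# (file names and contents are illustrative)
cat > sources.txt <<'EOF'
src/main.c
src/utils.c
EOF

cat > options.txt <<'EOF'
-results-dir results
EOF

# Offload the analysis. Compilation runs locally on the CI server; the
# analysis job is then queued on the cluster whose scheduler was started
# with -name JobScheduler on the head node host "analysishost".
/usr/local/Polyspace_Server/R2024b/polyspace/bin/polyspace-bug-finder-server \
  -batch -scheduler JobScheduler@analysishost \
  -sources-list-file sources.txt \
  -options-file options.txt
```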

The analysis executes locally on the CI server up to the end of the compilation phase. After compilation, the analysis job is submitted to the other server. On completion, the analysis results are downloaded back to the CI server. You can then upload the results to Polyspace Access for review, or report the results in some other format.
