Virtual Commissioning using Simulink Part 1 - Design with Simulation
Overview
Across many industries, production systems rely on increasingly complex algorithms to achieve greater productivity. In addition, the cost of downtime for implementing improvements in control software can be prohibitive. Today, companies are turning to virtual commissioning to address these two issues. Mining companies can leverage virtual commissioning to ensure new control algorithms will be reliable, achieve desired results, and be implemented seamlessly with minimal downtime.
Virtual commissioning uses dynamic models to design and validate algorithms for improved productivity. After validation, these algorithms can be automatically deployed to a simulated PLC, where the code running on the PLC is tested against the plant model.
This video focuses on two aspects of virtual commissioning: design with simulation, and implementation with code generation. You will see the workflow demonstrated using a common challenge across mining companies: a multi-tank level control problem.
About the Presenter
Ruth-Anne Marchant is a Senior Application Engineer specializing in Simulink and Model-Based Design. Since joining MathWorks in 2015, her focus has been on helping customers adopt Model-Based Design with Simulink. Prior to joining MathWorks, Ruth-Anne worked in the Canadian aerospace industry as a control systems engineer. She holds a BASc in computer engineering and an MASc in electrical and computer engineering, both from the University of Waterloo, Canada, specializing in control systems.
Branko Dijkstra is a principal technical consultant specializing in Model-Based Design workflows for process industry optimization. Prior to joining MathWorks, Branko was an engineering manager for the development of automotive climate control and electric vehicle thermal management systems. Before that, he worked in the microlithography industry. Branko received his M.E. based on his work modeling a batch crystallization plant. He received his Ph.D. in control engineering (microlithography) from Delft University of Technology, the Netherlands, based on his thesis, Iterative Learning Control Applied to a Wafer Stage.
Recorded: 15 Sep 2020
Let me tell you about an experience my colleague shared with me from a recent visit to a mining site. During her visit, the entire plant was brought to a stop. Production halted. The operators were scrambling to identify the cause. After about 15 minutes of scrambling, the team realized an operator had made a programming mistake in the DCS, causing the DCS to crash.
From there, it took another 45 minutes of work to get the system back up and running again. While an hour of downtime may not be significant in many industries, in the mining industry it can translate to large sums of money due to lost production. And what if I told you that there was a way to reduce the risk of these events, even as these systems are becoming increasingly complex? Over the course of this session, you'll see how.
I'm Ruth-Anne Marchant, Senior Application Engineer at MathWorks. Today is part one of a two-part series on virtual commissioning, where you'll hear how a simulation-based approach can reduce the risk of production loss and downtime, even as systems become increasingly complex.
A 2017 white paper on digital transformation in the mining and metals industry, prepared by the World Economic Forum in collaboration with Accenture, outlines key challenges that the industry is facing. I've listed these key challenges on the slide here and grouped them into three. The first group focuses on global challenges. The second grouping focuses on challenges specific to the industry. And the final challenge is related to workforce and personnel.
Now the report found that the industry is turning towards digital transformation to help address some of these challenges. Some of these initiatives include autonomous operations and robotics, smart sensors, the convergence of IT and OT, advanced analytics and simulation modeling, and AI, or artificial intelligence. These initiatives are grouped into four common themes, which you see here: autonomous robotics and operations hardware, digitally enabled workforce, integrated enterprise platforms and ecosystems, and finally, next-generation analytics and decision support.
Now while MathWorks technology can support a range of the initiatives listed here, today we'll focus on autonomous operations and robotics and advanced analytics and simulation modeling. Let's further define these initiatives. Autonomous operations and robotics can be described as deploying digitally-enabled hardware tools to perform and improve activities that have traditionally been carried out manually or with human-controlled machinery.
The idea here is to eliminate the need for human intervention in nondecision-making functions or to further improve process control and stability. And digging into the advanced analytics and simulation modeling initiative, this is about leveraging algorithms to process data from multiple sources to provide real-time decision support.
So why these two? Well, more and more we're having conversations with customers who want to do things like increase their system throughput, minimize their energy costs, and maximize their yield. Not only do we hear more customers talking to us about these types of projects, but a recent report by McKinsey supports what we've seen anecdotally.
The report from 2018 focuses on digital opportunities in metals. One of the use cases that the report highlights as delivering the largest impact is related to yield, energy, and throughput. The report calls out benefits such as strengthening process control and boosting plant profits by optimizing processing parameters. These types of projects can fall within both the simulation modeling and autonomous operation initiatives that I mentioned earlier. However, these digital transformation projects do come with their own challenges.
To highlight these challenges, I'd like you to imagine this. You are tasked with upgrading an algorithm in one of the processes of your plant, with the goal of optimizing the process. One challenge you could face in this project is a deluge of data. These days, systems contain more and more sensors. You could have sensors related to motor torques and currents, belt rates, and loads. You could have multiple sensors at a single location for redundancy, or maybe a smart sensor that estimates a measurement when certain conditions are met.
The sensor data may be located in disparate sources. Maybe some of the data is saved in operational technology platforms like PI historians. Maybe some of it is saved as streaming data or in data stores like AWS or Azure. And maybe some of your data is saved in files like CSV files. The increased number of sensors and data saved in disparate sources are two examples of how digital transformation adds complexity to projects.
Now with all of this complexity, it's harder and harder for new operators to achieve the same level of productivity out of the system as more experienced operators. And as a result, the system does not always perform as optimally as it otherwise could, and productivity can be variable across your operators.
A second challenge you could face in your project to optimize one of the plant's processes relates to testing the algorithm you created using the data you have. Once you have an algorithm, it is important to test it, and ideally you'd like to test it in some sort of simulation environment on a computer first. You want to test it across the full range of expected operational scenarios: both normal, expected operating conditions and unexpected or failure conditions.
Now to fully test the algorithm, you want to understand how it will perform when it's connected to the real physical system. This is different from testing the algorithm on a virtual or soft PLC, because it includes testing how the algorithm will interact with the system that it is controlling.
That said, it's often impractical to use the real hardware as the first testing ground for an algorithm. For example, testing a new, unproven algorithm for the first time on real hardware could break your hardware. Testing will take the equipment or process out of service for a period of time. And maybe the tests are too risky to carry out on the hardware due to safety concerns.
As a result, more often than not, algorithms are not tested sufficiently. A potential consequence is that the system could detect an issue when there isn't one and shut itself down. Or worse, it could break your expensive equipment. Both result in costly system downtime, not only because of the cost of replacing equipment, but also because of lost production revenue.
At this point, what do you do? You still have this project to optimize your processing system. And you know that this type of project I just described includes these risks of suboptimal performance and lost production revenue. How can you reduce these risks? You can use a simulation-based approach.
Using a simulation model of the plant, you can design, test, and validate algorithm changes and upgrades before deploying them to production. Before implementing and testing any changes on the physical system, which can be risky and impractical, you instead use a simulation model of the system. In addition to the algorithm under development, this model also captures the physical system dynamics, the environmental models, and other system components.
You can leverage existing sensor data to ensure that the model is an accurate representation of reality, thereby allowing you to perform virtual tests by simulating the system on your computer. With the knowledge that you can reduce the risk of causing costly system downtime, you are ready to get started.
This simulation-based workflow has three main components. First is desktop simulation, where you model your system dynamics, including physical components, so think hydraulic, mechanical, and electrical components, as well as environmental components, for example, operator inputs or obstacles in the environment. The model also contains the algorithms, for example, PID loops or logic to control different system operating modes.
I'd like to point out here that this is different from the typical scenario where the PLC software is simulated on a soft or virtual PLC with emulated I/O. Typically in that setup, there's no model or representation of the physical system. Whereas in this desktop simulation component, the simulation includes not just the algorithm, but also the dynamics of the physical components and the environmental components.
The second component is automatically generating code of the algorithm, and the third component is to perform virtual commissioning tests. This is where you have your algorithm code running on a virtual or soft PLC within your PLC vendor environment, and your dynamic plant model running in Simulink. And the two are connected, talking to each other.
So I'll just make a comment here that using automatically generated code is not a requirement for this to work. If you have existing code, you can use it in this way as well. With this overview of the three components, you're keen to get started with the desktop simulation piece, and we'll focus on that component for the remainder of the session. Part two will cover the other components. So at this point, maybe you're asking yourself, what does this look like? How do you do it?
So let's dig a little bit more into this project you're working on, where you are trying to optimize one of your system processes. The example we're looking at here is a flotation cell circuit. You are tasked with finding an algorithm to control the level of the fluid in each of these tanks such that the interaction between the different tanks is minimized.
This, it turns out, is quite a complex task due to the high interaction between the process variables. A control action implemented at any point in the flotation circuit tends to be transmitted to both upstream and downstream units. Moreover, large variations in the flow rate to the first cell and varying composition of the raw ore can also be problematic.
So instead of developing and tuning a new control algorithm on the real system, which can lead to the problems listed earlier, we turn to a simulation-based approach to design a level control strategy. As a starting point, you create a model in Simulink, which is a block diagram environment for modeling and simulating dynamic systems. The model contains the cells and the liquid inside the cells. It also contains the valves that connect the cells; these valves can open and close based on an input signal. The model contains the effects of gravity, as the cells are vertically displaced from each other. Finally, it contains the existing control algorithms and any environmental effects.
Now you want to ensure that the model of the system is as close to the real system as possible. So you can bring your sensor data into MATLAB. And this sensor data can be used by Simulink to validate the model of your physical system. So in this way, you have a simulated version of the real system that is an accurate representation of reality. And it contains both the physical components and their dynamics as well as the algorithms.
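As a rough illustration of that validation step, here's a minimal MATLAB sketch of importing logged data and overlaying it on a simulation result. The file name, column names, model name, and logged signal name are all hypothetical placeholders, not part of this demo:

    % Import logged sensor data and compare it against the model's output
    data = readtable('tank_levels.csv');            % logged plant data (hypothetical file)
    out  = sim('multitank_plant');                  % simulate the candidate model
    simLevel = out.logsout.get('Level3').Values;    % assumed logged signal name
    plot(data.Time, data.Level3, simLevel.Time, simLevel.Data)
    legend('Measured', 'Simulated')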
After you validate the model using sensor data, you get to work on the algorithm improvements. This is where you can also include the knowledge of your experienced operators. You can develop and refine the algorithm by capturing their inherent knowledge of the system, which in turn can also improve the performance of the system across a wide range of operators. Effectively, every operator has the opportunity to be an experienced operator.
With each improvement you make, you test the algorithm by running a simulation and observing the results. This is where you can start to test the algorithm across a wide range of operating scenarios and conditions. Effectively, this is the beginning of your improved test capabilities, intended to help you ultimately reduce the risk of costly system downtime.
The first time you simulate, it's unlikely that the performance will meet your requirements. Therefore, you can iterate on the algorithm, run the simulation again, and observe the new results. In this simulation environment, you can iterate quickly, testing by simulation along the way, and improve your system performance. So what does this really look like? Let's go into Simulink to find out.
So now we're in MATLAB, and we're going to see how this can be done. I'd like to start by opening up the Simulink start page. To do that, we type simulink into the command window and press Enter, and we get the Simulink start page. For those of you who are new to Simulink, this is a really great place to get started building up your Simulink models. You can start from some blank template models, or you can explore the examples using this tab here.
So I open up a blank model and click Create Model. This gives me a blank Simulink canvas. I'll just open it up a little bit more here. Simulink is a block diagram environment for modeling and simulating dynamic systems. You can open up the Library Browser, here, to access the blocks, and you can build up a model by dragging and dropping blocks from the Simulink Library Browser into your canvas.
So let's say, for example, I want to pull in an integrator. I drag and drop an integrator in here. I can get a block from the Sources tab; maybe I want a step input, so I grab Step and drag and drop it in here.
Alternatively, if you know the name of the block you're looking for, you can click on the Simulink canvas and start typing its name. So I'm going to look for a gain block. This lets me search the database of Simulink library blocks and quickly insert them into the canvas.
I can line the blocks up; you can see I've got these nice blue helper lines here. And clicking on the lines allows you to connect from one spot to another. So in this way, I can start to build up a model of my system. There are a number of ways of visualizing your results. One way is a scope, so I'm just typing scope here.
If I open up the scope, you can see I've got it over on the left-hand side. Pressing the green run button up at the top runs a simulation of the system. This is a fairly simple dynamic system, but it's there to help those of you who are new get started.
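For those who prefer scripting, the same little demo can be built programmatically from MATLAB. This is just a minimal sketch; the model name is arbitrary:

    % Build and run the step -> gain -> integrator -> scope demo from code
    mdl = 'simpleDemo';
    new_system(mdl); open_system(mdl);
    add_block('simulink/Sources/Step',          [mdl '/Step']);
    add_block('simulink/Math Operations/Gain',  [mdl '/Gain']);
    add_block('simulink/Continuous/Integrator', [mdl '/Integrator']);
    add_block('simulink/Sinks/Scope',           [mdl '/Scope']);
    add_line(mdl, 'Step/1', 'Gain/1');
    add_line(mdl, 'Gain/1', 'Integrator/1');
    add_line(mdl, 'Integrator/1', 'Scope/1');
    sim(mdl);   % equivalent to pressing the green run button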
Let's close this and go right into the model of our flotation circuit. Here is the model of our multitank flotation cell circuit. It consists of five cells, or tanks. I'm going to zoom in on one of them so we can see it in a little bit more detail. Let's zoom in on tank number three. I have a subsystem here to represent the dynamics of the tank. I can go underneath this subsystem, and you can see I have this block here, labeled tank, where fluid comes in and out through these ports, and where I take a measurement of the level and the volume of liquid inside the tank. Now let's go back up to the main level of our model.
One of these ports is connected to a valve, and the valve is controlled using our control signal; we'll get to that in a minute. The fluid is then transported through the valve into a pipe, which is there to represent the elevation change and includes the effects of, say, gravity in the system. And I have five of these combinations of tank, valve, and pipe to represent the multitank plant.
Once you have this model of your dynamic system, as I mentioned in the slides, it's important to validate it against real-world data. You can do this by bringing in your sensor data and running parameter estimation techniques to automatically tune your parameters so that the output, the tank level or the tank volume in this example, is an accurate representation of what the tank level would be for those same corresponding inputs in your system.
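If you script this step using Simulink Design Optimization, the pattern looks roughly like the following. Treat it as a sketch: the model name, tunable parameter names, measured-data variables, and the estimateCost function are all hypothetical:

    % Parameter estimation against measured tank-level data (sketch)
    p   = sdo.getParameterFromModel('multitank_plant', {'ValveCd','TankArea'});
    exp = sdo.Experiment('multitank_plant');
    sig = Simulink.SimulationData.Signal;
    sig.Name      = 'Level3';
    sig.BlockPath = 'multitank_plant/Tank3';  % block producing the measured output
    sig.PortType  = 'outport';
    sig.PortIndex = 1;
    sig.Values    = timeseries(measuredLevel, measuredTime);  % logged sensor data
    exp.OutputData = sig;
    % estimateCost is a user-written function that simulates the experiment
    % and returns the residual between simulated and measured levels
    pOpt = sdo.optimize(@(v) estimateCost(v, exp), p);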
Now I want to show you a few things that you can do with this model. What I want to do first is test it in open loop: I want to get a sense of how this system behaves without any control algorithm. This is helpful as you start to build up the model of your system; it's important to take an incremental approach as you're building these models instead of doing it all in one shot.
So what I have under this block here is that multitank plant model. I have an inlet flow rate, which is fed in by some parameter here, and I can even specify the units. Then I have these valve commands, which specify to each valve whether it's open or closed and by how much, and I have one of these for each of the valves.
And then the output from this block is these signal values here, which are measurements of the tank level. Each of the five tanks has its own level measurement, and the result is plotted in this scope, which I have over on the left-hand side. Now I'm going to run a simulation by pressing this green run button up at the top and observe the results in the scope.
What you're looking at here is the five different tank levels. For this simulation, all of the valves are closed; you can see that I've multiplied my port diameter by 0, which represents a closed valve. So what we would expect in this case is for there to be no change in the level of any tank except for the one at the top, because there's a constant flow into that tank. And indeed, that's what we see in the plot, where the yellow line is the tank at the top. The level of this tank is gradually increasing while the other four tanks remain constant.
So what happens if I want to open up a valve? I'm going to test what that result looks like: I'll open up the first valve fully, rerun the simulation, and observe the results. Let's just make this bigger here. This is a different result, as you would expect, from the first simulation where all the valves are closed.
In this result, the levels of tanks three, four, and five, the orange, green, and purple lines, are constant. But we have the valve open for tank one, so we would expect that tank two would gradually increase, and that's the blue line, which is what we're seeing. Meanwhile, the level of tank one is decreasing, because it is now feeding into tank two. You can see that the effects of gravity are also being taken into account in this model.
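Scripted, those two open-loop experiments might look like the sketch below, assuming the valve commands and model name use these hypothetical workspace names:

    % Case 1: all five valves closed (a command of 0 scales the port diameter to zero)
    valveCmd = [0 0 0 0 0];
    out1 = sim('multitank_openloop');
    % Case 2: first valve fully open
    valveCmd = [1 0 0 0 0];
    out2 = sim('multitank_openloop');
    % Overlay the five tank levels from both runs
    plot(out1.logsout.get('TankLevels').Values); hold on
    plot(out2.logsout.get('TankLevels').Values); hold off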
So I'm going to stop here with the plant model and move on to integrating a control algorithm. Here we have another model, which contains both the plant model, in this subsystem here, and a subsystem to represent the control algorithm, here. They're connected together as you would expect in a standard closed-loop feedback system.
So we have a level set point here, and we're going to adjust the set point for the level of the first tank and compare it with the measurements of the levels of each tank. And we have a plot here that displays the measured output. If I just double click in here, you can see that this is effectively the same plant model that we were looking at before.
And then if I double click on the controller block, what you can see here is that I have a PI controller on each of the control channels. So for each of the level controllers, there's a separate PI controller. Now, let's say this is aligned with the existing system: if I double click on this block, say these kp1 and ki1 values are set to what they are in your existing system. And what you can start to do is test the performance of your existing algorithm within this simulation environment.
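In script form, baselining the existing gains might look something like this; the gain values, model name, and logged signal name are hypothetical:

    % Set the PI gains to the values used in the existing system (hypothetical)
    kp1 = 2.0;  ki1 = 0.4;    % loop 1; kp2..kp5 and ki2..ki5 set similarly
    out = sim('multitank_closedloop');           % run the closed-loop model
    plot(out.logsout.get('LevelError').Values)   % assumed logged error signal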
So I can run the simulation by pressing the green run button, and I'll open up the scope so you can visualize the results. Here we see a somewhat more interesting plot. The top of the plot is showing the level of the liquid in each of the five tanks, where yellow is the first tank, blue is the second tank, all the way down to purple, which is the last tank. The bottom plot is showing the error between the level set point, which is set in this block over here, and the measured output.
We want this error to be as close to zero as possible. So what we're seeing in this plot is that although we are changing the set point for the level of tank one only, right here, while all the other set points stay the same, it still has a pretty big effect on the other levels. You can see the error for the tank level in tanks two through five does not remain at zero. So there's this cascading effect, and it's even more pronounced in the level measurement: we can see that as the level of the first tank goes down to meet this lower set point, each of the subsequent level measurements has this overshoot.
So what we want to do is reduce the overshoot that we're seeing here by updating the control algorithm. How can we do that? Well, there are a couple of approaches you can use. You can certainly start to bring in operator experience: some of your really experienced operators have a good sense of how to adjust the control parameters in these blocks to reduce that overshoot.
But like I mentioned, this is actually a fairly complicated problem. Instead of manually tuning, there are ways to automatically tune these parameters to, say, meet a set of requirements. I want to show you how to do that now. I'm going to open up an app called the Response Optimizer, and I'll load a previously saved session just for the sake of time.
What this tool does is allow you to specify a set of requirements on your simulation output and also define a set of variables that you want to tune. In our case, these are going to be those PI parameters. Then, the tool will automatically tune these parameters such that the requirement you specified will be met.
So what you see in the plot here, for this example, is an upper bound and a lower bound on the level of tank one. I constructed these before this session; they're in that preloaded file that I started with. What you can see in the plot is a yellow region and a white region. The yellow region is where we do not want the measured level of tank one to be; we want that measured value of tank one's level to stay within the white region.
What you're seeing with the blue line is the simulated value of tank one's level for the control parameters that are operating on the real system. So what we want to do is automatically tune our ki and kp values such that this blue line lies within the white, permissible region. When that happens, we know the requirements are met.
Now that I have this configured, I can press this green run button to optimize the parameters. Just move this over here so we can watch it in action.
So while this optimization algorithm is running, what this is doing is it's running a simulation, and then another simulation, and then another simulation. Each time the simulation runs, it evaluates the output of the simulation against the requirement that I defined. And it's looking to minimize the error between the output and the requirement that's defined. And it does this by automatically adjusting the PI controller gains.
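The same loop the app automates can also be expressed programmatically with Simulink Design Optimization. This is a rough sketch; the model name, parameter names, bound values, and the levelCost function are hypothetical:

    % Tune the PI gains against a bound on tank one's level (sketch)
    p   = sdo.getParameterFromModel('multitank_closedloop', {'kp1','ki1'});
    req = sdo.requirements.SignalBound;   % piecewise-linear envelope requirement
    req.BoundTimes      = [0 100];        % one segment spanning the simulation
    req.BoundMagnitudes = [0.55 0.55];    % hypothetical upper bound on the level
    % levelCost is a user-written function: it simulates the model with the
    % candidate gains and returns evalRequirement(req, simulatedLevel)
    [pOpt, info] = sdo.optimize(@(v) levelCost(v, req), p);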
You can see here in the plot that the simulation is running again and again, and each time, the simulation results change a little bit.
At this point, the optimization has finished; the algorithm knows it is done once it has either met the defined requirement or gotten as close as possible. Looking at the simulation results for the final values of our controller gains, we can see a performance improvement relative to the initial controller gains. I'll just go back to the Response Optimizer app to show that, for the most part, our requirement has been met.
So what we've shown with this example is that with Simulink, you can model the dynamics of your entire system. So that includes the model of your physical system and your algorithms. And you can connect them together. Then, you can start to test what the response is like in simulation. And you can run it through a whole range of tests for your different operating conditions that you may not otherwise want to test on your real system.
And then, you can also use this environment to improve your control algorithms. And in this example, we focused on improving the performance of the level controller in this multitank plant flotation circuit. Let's go back to the slides and continue with the presentation.
So in this example, you saw how to apply the design-with-simulation component to update a control algorithm for reduced interaction between the tank levels in a multitank flotation cell circuit. Going back to the digital transformation initiatives presented earlier, you can use a simulation-based design approach to do other things as well. I've listed some of them on the slide here. Let's go through them one by one.
One thing you can do is build an optimal control strategy to maximize profit: for example, maximize throughput or yield, or minimize energy cost. I'll have an example of that in a minute. You can also use a simulation-based approach to perform trade-off studies, for example, studies adjusting input flow rates, level set points, or component sizing if you're in the development stages of your project.
You can also integrate AI algorithms, for example, an algorithm to monitor the condition of valve health or an algorithm to predict when a component needs maintenance. The last use case that I want to highlight here is that you can reuse the model as a training simulator, where you can train your new operators to use the system in a safe, simulation-based environment. This approach is used in other industries such as aerospace, rail, and maritime.
So you can leverage a simulation-based approach to take on these initiatives while also managing complexity and augmenting your test capabilities, which in turn reduces the risk of expensive system downtime. Now, let's look at some examples of companies who have used this simulation-based approach and achieved business benefits.
This first example is from Tata Steel. Tata Steel is one of the top global steel companies, with operations in 26 countries. It uses up to 200 cooling towers at its plants across the world. These cooling towers are expensive to run, they consume a lot of energy, and they're pretty expensive to modify through retrofitting. Tata Steel was looking for a way to reduce operating costs by reducing how much energy is consumed by the process.
Instead of retrofitting the physical system, which is an expensive option, they asked themselves: is there a way we can do this by modifying the control software? What they found was that the short answer is yes, absolutely. And how did they do it? Tata Steel engineers used a simulation-based approach and built a model of the system. They validated the model using sensor data from the real system. Then they applied optimization techniques to tune the parameters to minimize the consumed energy. As a result, they were able to save 40% of the energy using this simulation-based approach.
Now, if you ask me, 40% is a considerable improvement achieved through software changes alone. Let's look at a second example, which focuses on minimizing tests on hardware. Baker Hughes is one of the world's largest oilfield services companies and provides the oil and gas industry with products and services. They were looking to improve the quality and precision of algorithms used in their oil and gas drilling equipment; you see a picture of that on the right-hand side.
Precise steering control is required during drilling, so algorithms must take accurate measurements, even during the intense vibrations expected during operations. Moreover, field tests are both difficult and expensive. So Baker Hughes turned to a simulation-based approach to develop these algorithms. They created a Simulink model that included the environment, sensor models, and other electrical and mechanical components, as well as their existing algorithms.
Then, they tested the existing algorithms in simulation. They created test cases to replicate drilling scenarios and ran these tests in Simulink. They used the results of these tests to inform decisions on how to improve the algorithms. The Baker Hughes engineers tested the improved algorithms, again in simulation, reusing previous test cases over and over again through the development cycle. And as a result, they were able to reduce expensive field tests.
This quote from the Signal Processing and Controls Functional Manager says it all: "A single field test can cost more than $100,000, and even at that cost, does not replicate the complex scenarios our customers encounter. Simulations and HIL tests with Model-Based Design enabled us to simulate realistic conditions and conduct fewer field tests." So what would be the impact on your business if you could reduce field testing while also maintaining, or even improving, the quality of your algorithms?
At this point, we've discussed common challenges across the mining industry and the digital transformation initiatives intended to tackle these challenges. We've discussed some common risks associated with these projects and how a simulation-based approach can help you manage these risks. Finally, we've shown some examples of others who have successfully achieved benefits as a result of a simulation-based approach.
The key point you can take away from this is that you can use Simulink, and a simulation-based approach, to reduce the risk of costly system downtime by testing your algorithms in simulation first. Simulation-based tests effectively augment your test capabilities and reduce how much testing needs to happen on the hardware. This, in turn, can reduce unwanted system downtime.
The second point is that you can improve the performance of your increasingly complex system. You can leverage sensor data to build a representative model, and you can improve the performance and consistency of your system while also incorporating the knowledge of your most experienced operators. As we wrap up, recall the components of a simulation-based workflow: desktop simulation, code generation, and finally, virtual commissioning.
Today, you heard about desktop simulation. In the second part of this two-part series, you'll hear about code generation and virtual commissioning. So where can you go from here? Well, first, you are invited to join us for the second part of this webinar series on virtual commissioning with Simulink. Next, we'd love to see you at our other webinars in this mining webinar series. I've provided links to those two items on the slides, which you will receive at the end of the presentation.
And finally, if you have a project in mind and want to explore how to apply what you saw today to your specific application, please get in touch with us; I've got the contact details up on the screen here. We can help with a guided evaluation or proof of concept. At this point, I'd like to close. Thank you very much for your time. Have a good day.