Video length is 39:51

Control Loop Performance Monitoring with MATLAB

Mehmet Yagci, NAPCON

The operation of modern process industries relies on a great number of control loops implementing a variety of control structures. Industry mostly prefers PID controllers because they are simple to implement; however, even perfectly tuned control loops deteriorate over time for various reasons, which can lead to degraded process operation. Additionally, problems in the regulatory layer can nullify the benefits of APC systems and real-time optimization. In some cases, performance degradation is observable by operators or process experts; however, it is difficult to monitor control loops manually by visual inspection.

Achieving Industry 4.0 requires diagnosing poorly performing control loops before they cause larger problems. Diagnosing these issues requires strong fundamental knowledge of control, process understanding, and computational skills. The results must also be presented in a way that is easy to understand at different levels of the operation. This video explains how to combine these skills to create an automated system for control loop monitoring and generate significant economic benefit for an organization.

Published: 12 Jan 2021

So good morning, good afternoon, or good evening, wherever you are joining this webinar. Greetings, everyone, from Finland. As Joris introduced, my name is Mehmet Yagci, and today, I'll be talking and sharing my experiences about control performance monitoring. So before my presentation, actually, I'd like to thank all of the MathWorks team there. So they are actually the ones making this webinar session possible. So I would like to thank you all before starting the presentation.

So today, we will discuss an application project that was also researched in my master's thesis and applied to a refinery a few years ago. But to clarify the story behind that, first I would like to briefly introduce myself again. Currently, I am working as a Lead Process Control Engineer at NAPCON, which is a part of Neste Engineering Solutions. It has been almost seven months since I joined NAPCON.

And before here, I worked as Modeling and Process Control R&D Team Leader at Turkish Petroleum Refineries for almost seven years. There, I also did my master's thesis, and the thesis project was on today's topic of control performance assessment. We developed an application for my previous company. So today's webinar session will cover the experiences I had during that master's thesis project, which also overlaps with my time at my previous employer.

So some of you might already have noticed that I have two different names on social media platforms. That is basically because I have dual citizenship, Turkish and Bulgarian. Actually, I was born in Bulgaria, but all of my life until now has been spent in Turkey. Anyway, I have put my LinkedIn information at the bottom-left corner, so feel free to connect with me via LinkedIn if you would like to discuss something further.

Before going further, I also would like to briefly mention my current company, NAPCON. NAPCON is a business unit of Neste Engineering Solutions, and Neste Engineering Solutions is in turn part of Neste, which is one of the global leaders in renewable fuels. At NAPCON, we are more than 95 professionals working on different product families. And up to now, 170 applications have been delivered to different industries in 11 countries. So I should say NAPCON is a well-established organization, because it has been in action since 1986.

So NAPCON's business areas can be divided into three subcategories. Improve is the product family where we provide advanced process control and different optimization solutions. Train is the product family where we develop simulators for processes and for operator training. And finally, in the Understand product family, we have advanced analytics solutions which can analyze process data and deliver the results as visuals for inspection by domain experts. Each category brings both software solutions and consultancy services.

So today, our agenda will be like this. I like this way of presenting ideas, where we first ask ourselves some questions and then try to answer them. Since the time is limited, we will not have a chance to touch upon everything in the control performance assessment philosophy or its applications. But today, we will try to give a general idea and share some tips and tricks for this kind of application.

So let's first start with asking the most essential question. So why is the control performance assessment needed? I suppose most of the attendees of today's webinar are pretty much experienced with process control. But actually, I would like to start with the basics of process control here.

So if you have a process, and you want your process to be safe, sustainable, reliable, low cost, able to produce in-spec products, and highly flexible, you must certainly have a process control system to make sure that you satisfy these needs. And if you have a process control system, this means that you have many components in your control system, such as transmitters, sensors, actuators, controllers, and so on. But it is quite natural that the more components you have, the more effort you need to maintain them.

But, OK, if we talk about process control, I also would like to show this famous control hierarchy diagram. It shows what kind of control layers we have in a typical control system and how they talk to each other, so that we can better understand why control performance assessment is actually needed.

And as you see, actually, it starts with the field devices like valves, transmitters, at the bottom and goes up to the management level. So at the management level, you have some business objectives. And these objectives are translated into control objectives as you move through different layers.

This means that in each of the layers, you must have a well-functioning system to translate the incoming objectives. And normally, process industries focus more on the complex systems like advanced process control and other demanding and rewarding systems like real-time optimizers.

However, we mostly forget that the control layer which translates the outcomes of these systems directly to the field is the base layer control, or regulatory control, as you can see at the bottom of the figure. And in today's world, we know that almost 95% of all controllers in industry are basic PID controllers. And that is the layer with the direct connection to the field.

So if we are talking about optimizing the processes, we need to make sure that we have a reasonably well-performing base layer control. So talking about base layer control, I would like to show this simple feedback control loop which shows the components in a typical feedback control loop. But we will come back to that in the following slides.

So at that point, you might ask, what is the difference between good and bad control? The answer is simple, actually. The trend on the left shows a process with high variation. Just consider the limit as the production capacity: the more you utilize your capacity, the more throughput you get.

And if you have a bad control, this means that most of the time, you are far away from the maximum throughput. And even sometimes, you are exceeding the limit, which might cause some reliability problems in your process control loop or in your unit, let's say.

On the other hand, if you increase the performance of your control system, you will have less variation. Then you can shift your process towards the limit so that you get more throughput and more benefit. In other words, if you increase your control performance, you will have less variation in all of these: the control error, the process output, the control moves, and also the maintenance actions needed.

Of course, these benefits are translated into numbers, so I would like to share some general, average statistics with you to give a better understanding of the benefits of performance assessment. The survey states that each control engineer in a plant is responsible for 450 control loops on average, which is a very high number.

Of course, there are many more than that, but there are seven main types of problems faced in control loops. They are basically hardware problems, like control valve issues which need maintenance in the field, and soft problems, where you can fix the problem by just sitting in front of your control system, like tuning problems.

And also, it was observed that each control loop requires one hour of investigation to find out whether it has a problem or not. Most industries today have some kind of advanced process control layer on top of the regulatory layer, and we need to remember that 25% of the benefit of APC comes from the regulatory control side.

And of course, it depends on the process and the industry you are operating in, but that is, again, an average number. So if you increase the performance of your controller, it can lead to around $10,000 of savings. That is an average number; depending on the conditions, you might get less or more.

I also would like to show another figure, which shows the benefit of efficient maintenance. With no maintenance, or let's say with only basic monitoring, you lose the benefit of your controllers over time, because over time your controllers start to degrade and lose their performance.

OK, up to now, I think we have talked enough about the benefits and the need. So how can we assess control performance? That is the next question. Normally, there are mainly five steps in a performance monitoring process, and I would like to go through each of them briefly.

The first and most essential step is to collect data. There, you need to store as much data as possible, but there are some musts and some nice-to-haves. Sensor readings, setpoints, and controller outputs are essential for performance monitoring. But if you collect more variables, like control modes, tuning parameters, and operator actions, you will have much more information about the performance of your controllers.

The second step is a measuring step, where you need to select or define a benchmark against which to compare your performance. In that step, you may need to collect some prior information, like process delays. We will come back to that issue in the following slides.

The third step is the identification step. In this step, you rank your controllers based on their performance and highlight the poor-performing ones to prioritize your maintenance efforts. There, you can also create alarms based on the KPIs you calculated, create notifications, or create periodic performance reports so that you can efficiently allocate your maintenance resources.

The fourth step is the diagnostics step. In this step, you pinpoint the reasons behind the poor performance. There, you need to choose the correct identification method, or methods, for each of the problems. But if there is no method available in the literature, or elsewhere, you can develop your own identification method to diagnose the root causes.

And the fifth and final step is the suggestion and maintenance step. In this step, you give advice to process control engineers to ease their decision-making. There, you can even automate some maintenance actions, like pushing optimum values of the tuning parameters directly to your control system. That is also possible.

So at that point, you might say, OK, this performance assessment process seems tedious. In one respect, that is true. But there are some very basic and simple statistical metrics with which you can avoid complex calculations, and the most fundamental ones are the mean and the standard deviation.

So normally, your controller tries to minimize the control error as much as possible. In ideal conditions, you would have zero mean and zero standard deviation. But in practice, that is not possible, due to measurement noise or disturbances directly affecting your control loop.

And we can look at some examples. The process on the left has a higher mean and standard deviation of the control error compared to the process on the right, which means it has worse performance.

But one quick note here: instead of looking at single index values, you should trend them so that you can identify possible degradations over time. A single value of, say, the mean or standard deviation of the control error is often not meaningful on its own; you need to look at the values over time to see whether your control performance is decreasing or not.
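
As a rough illustration, a minimal MATLAB sketch of trending these basic statistics could look like the following; the tag names sp and pv and the window length are assumptions, not something from the webinar.

```matlab
% Minimal sketch: trend the basic statistics of the control error over time.
% sp (setpoint), pv (process value) and the window length are assumptions.
e = sp(:) - pv(:);               % control error
win = 720;                       % e.g. 12 h of 1-minute samples (assumption)
eMean = movmean(e, win);         % rolling mean of the control error
eStd  = movstd(e, win);          % rolling standard deviation

plot([eMean eStd]); legend('mean of error', 'std of error');
```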

You probably remember that there were some nice-to-haves in the data collection step, and the control mode was one of them. It is very simple to calculate, but very informative. Here, you can see that the process on the left spent almost 30% of its time in manual mode over the lifetime of the controller, whereas the process on the right is fully in automatic mode. So by just simple data collection, it is also possible to see the performance of your controller.
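
A minimal sketch of that manual-time calculation might look like this; the variable name ctrlMode and the logged mode values are hypothetical.

```matlab
% Minimal sketch: fraction of time a loop spends outside automatic mode.
% ctrlMode is assumed to be a logged string signal with values such as
% "AUTO", "CAS", "MAN" (the tag and value names are hypothetical).
isAuto = ctrlMode == "AUTO" | ctrlMode == "CAS";
pctManual = 100 * (1 - mean(isAuto));
fprintf('Loop was in manual for %.1f%% of the time\n', pctManual);
```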

It is also possible to derive some more advanced metrics. In fact, they are not really advanced, but derivations of the basic statistical metrics. One example is the ratio of the standard deviations of the control error and the controller output, which shows how much of the variation in the control error is caused by the controller output.

The lower the value, the more likely it is that you have control valve problems. Here is an industrial example again: the process on the left has a lower standard deviation ratio, so we can suspect some control valve problems there.
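
This ratio is essentially a one-liner in MATLAB; a sketch under the same assumed tag names (sp, pv, op) is shown below.

```matlab
% Minimal sketch: ratio of the standard deviations of control error and
% controller output (op). A low value can hint at control valve problems.
e = sp(:) - pv(:);
stdRatio = std(e) / std(op(:));
```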

If you would like to investigate the performance further, the best option is to use a benchmark. In the literature, different benchmarks are available, like perfect control, minimum variance control, best achievable PID control, or open-loop control. It depends on what kind of control objective you have in your system.

Simply put, to calculate the performance of your controller, you divide the index calculated for the benchmark by the same index calculated for your actual controller.

Within those benchmarks, the most popular one is minimum variance control. It was first suggested by Åström in 1970 as a controller which tries to minimize the process variation as much as possible. Then, in 1989, Harris developed that idea further and used minimum variance control as a benchmark for assessing controllers.

There, as we discussed a moment ago, what you need to do is compare the index for both the theoretical benchmark controller and your actual controller. In this case, for minimum variance control, the index is the variance. The lower the ratio, the worse your controller. And because the index is bounded between 0 and 1, it also makes it easier for you to compare different controllers with each other.

But you could say, OK, the variance of the actual controller is pretty simple to calculate, but what about the variance under minimum variance control? That is also easy, if you know how to estimate your process with an autoregressive (AR) model. There is much more theory behind this, but simply put, the minimum variance you can reach with an MVC controller is the variance of the noise that remains after you fit the AR model.

So by just modeling your process with an AR model, the variance of the residuals becomes the achievable minimum variance of the controller. You will also remember that in the data collection and benchmark selection steps, I said you might need to collect some a priori information about your process. Here, it is the process delay that you need to consider.

So what does that mean in MATLAB? You can use the arima or ar functions to build your model, which is pretty simple, and the estimate or infer functions to obtain the residuals of your model. So you do not need to worry about writing a couple of hundred lines of code there. What you need to do is apply those functions to your data set.
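
A minimal sketch of one common AR-based estimate of this index is shown below, using the arima/estimate/infer route mentioned above (Econometrics Toolbox) plus impz from the Signal Processing Toolbox; the model order, delay, and variable names are assumptions rather than values from the webinar.

```matlab
% Minimal sketch of an AR-based minimum variance (Harris) index.
% e is the logged control error; na and d are placeholder assumptions.
y  = e(:) - mean(e);                 % mean-removed control error
na = 20;                             % AR model order (assumption)
d  = 5;                              % process delay in samples (assumption)

mdl  = arima(na, 0, 0);              % pure AR(na) model
est  = estimate(mdl, y, 'Display', 'off');
res  = infer(est, y);                % model residuals (innovations)
sig2 = var(res);                     % innovation variance

A    = [1, -cell2mat(est.AR)];       % AR polynomial A(q)
psi  = impz(1, A, d);                % first d impulse-response coefficients
sigMV2 = sig2 * sum(psi.^2);         % achievable minimum variance
eta    = sigMV2 / var(y);            % index near 1 is good, near 0 is poor
```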

And here is an example with industrial data again. You might say that the process on the right seems to have less variation in magnitude compared to the process on the left, but the controller performance index tells the opposite: the process on the right is actually worse than the process on the left in terms of the performance calculated using the minimum variance benchmark.

Sometimes, if you are working in the field, you might have heard statements from experts or operators like, OK, show me the process trends and I will tell you whether it is performing well or not. That might be valid for some controllers, given the broad experience of some of your colleagues. But remember the number of controllers each engineer is responsible for; doing this manually is a tedious task. That is why we need to apply control performance assessment techniques.

So now you know how to calculate the performance. But how about the reasons for performance problems? That's the next question. So you will recall that there were seven main problems facing the controllers. And we are lucky that almost all problems leave some kind of signals or fingerprints in either time domain trends or frequency domain trends.

So in addition, we are also lucky that most of the problems can be translated into two basic signal properties which are oscillation and nonlinearity. So mainly, control tuning and disturbance propagations reveal themselves as oscillations in the trends that we observe, whereas the sensor faults and the control valve problems reveal themselves as both oscillations and some nonlinearities there.

So first, I would like to talk about oscillations. These are the most common fingerprints that we observe in the data. The figure you see shows different oscillation patterns in control systems. Remember the statement of your experienced colleague claiming that by visual inspection alone, he or she can identify the problem.

For such a case, it gets much more difficult, because in this figure each trend has its own problem: it can be a tuning problem, a sensor fault, a stiction problem, or something else. They all reveal themselves as oscillation fingerprints in the data, but it is no longer an easy task to look at your data by visual inspection alone and find out what the problem in your control loop is.

The easiest way to detect an oscillation in a control loop is to look at the periodogram of the signal. A periodogram is based on the Fourier transform, which decomposes the signal into its frequency components. In MATLAB, we can just use the periodogram function for that purpose.
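
A minimal sketch of that check is shown below; the tag name pv and the sample time are assumptions.

```matlab
% Minimal sketch: look for a dominant oscillation with the periodogram
% (Signal Processing Toolbox). pv is the logged measurement (assumption).
y  = detrend(pv(:));                      % remove mean and trend first
Ts = 10;                                  % sample time in seconds (assumption)
[pxx, f] = periodogram(y, [], [], 1/Ts);  % power spectral density vs. Hz

[~, k] = max(pxx(2:end));                 % skip the zero-frequency bin
fprintf('Dominant period is about %.0f s\n', 1/f(k+1));
```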

And let's look at an industrial example here. As you can see, the oscillations are almost impossible to see in the time domain. But if you plot the periodogram of that signal, you will see that a component at 877 seconds is dominating the signal with 3.7% of the power. You might say that is not that big an amplitude for an oscillation.

But we will discuss that later. I have an industrial case study where you can see that even a small oscillation, one you cannot observe just by looking at the trends, can cause big problems.

Another way of analyzing oscillations is the autocorrelation function. Basically, it calculates the correlation coefficient of a signal with itself at different lags. In MATLAB, there is also a function for this, autocorr, which does the analysis for you.

For the same process I showed in the previous slide, you can see the oscillation pattern in the autocorrelation plot. It is also possible to characterize the oscillations just by looking at this plot, but I will not go into the details of that. There are methods available in the literature to characterize oscillations and create KPIs out of the autocorrelation function as well.
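
As a small illustration, the autocorrelation check could be sketched as follows; autocorr is in the Econometrics Toolbox, and the lag count is an assumption.

```matlab
% Minimal sketch: oscillation check via the autocorrelation function.
% A slowly decaying but clearly periodic ACF suggests an oscillatory loop.
y = detrend(pv(:));
numLags = 500;                               % choose well above the expected period
[acf, lags] = autocorr(y, 'NumLags', numLags);
stem(lags, acf); xlabel('lag'); ylabel('autocorrelation');
```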

It is also possible to analyze the patterns of oscillations in the time-domain trends. For example, you can apply a pattern recognition approach which calculates the similarity of the positive and negative areas of the control error. Then you can define a threshold for that similarity, and if the calculated index exceeds the threshold, you can say that you have an oscillation in your system.

If you have a process like that, you can easily identify the oscillations. But you need to remember that this kind of time-domain method, where you just compare the areas and their similarities, is prone to measurement noise: if you have high measurement noise in the system, you will not get that kind of clean pattern where you can easily say there is an oscillation. The same limitation applies to similar pattern-recognition-based methods.
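
A very simplified sketch of this area-similarity idea is shown below; it illustrates the general approach rather than any specific published algorithm, and the threshold is an assumption.

```matlab
% Minimal sketch: integrate |error| between zero crossings and check how
% similar successive half-cycle areas are. High, regular similarity over
% many half-cycles suggests an oscillation.
e  = detrend(sp(:) - pv(:));
zc = find(diff(sign(e)) ~= 0);                  % zero-crossing indices
A  = zeros(numel(zc) - 1, 1);
for k = 1:numel(zc) - 1
    A(k) = sum(abs(e(zc(k)+1:zc(k+1))));        % area of each half-cycle
end
similarity  = min(A(1:end-1), A(2:end)) ./ max(A(1:end-1), A(2:end));
oscillating = mean(similarity) > 0.5;           % threshold is an assumption
```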

Another important point, as you will remember from the beginning of this section, was detecting nonlinearities. One of the most common nonlinearities is quadratic phase coupling. This example comes from the literature, from the developers of this method, Choudhury and his colleagues. If you have phase coupling in your process, you have two frequencies dominating the signal, but also a coupled third frequency, f3, in your system.

To clarify the issue, I want to show you an example. We have two generated signals, which look almost identical in the time trend and in the frequency analysis as well. However, the main difference is that the signal at the bottom has a phase-coupling effect: you can see these peaks in the bispectral analysis, whereas the signal at the top does not. I will explain shortly why we are looking for phase-coupling effects.

So what you can do is apply this bispectral analysis, developed by Choudhury and his colleagues. There, you can isolate those components in your signal, and if there is phase coupling, you should observe that kind of peak, or mountain shape, in the frequency-pair plot of the bispectral analysis. This analysis is also based on the Fourier transform of the signal, as you may already have realized, because it still decomposes your time-series data into its frequency components.
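
For orientation only, here is a bare sketch of a segment-averaged squared bicoherence, which is the quantity behind this kind of phase-coupling test; the published method adds windowing, overlap, and statistical thresholds that are omitted here, and the signal name and segment length are assumptions.

```matlab
% Bare sketch of a segment-averaged squared bicoherence. Peaks at a
% frequency pair suggest quadratic phase coupling between those bins.
x    = detrend(op(:));                   % e.g. controller output (assumption)
L    = 256;  M = floor(numel(x)/L);      % segment length and count (assumptions)
X    = fft(reshape(x(1:L*M), L, M));     % column-wise FFT of each segment
half = 1:floor(L/4);                     % low-frequency region only

num = zeros(numel(half)); d1 = num; d2 = num;
for k = 1:M
    for ii = half
        for jj = half
            trip = X(ii,k) * X(jj,k) * conj(X(ii+jj-1,k));  % bin of f1+f2 is ii+jj-1
            num(ii,jj) = num(ii,jj) + trip;
            d1(ii,jj)  = d1(ii,jj)  + abs(X(ii,k)*X(jj,k))^2;
            d2(ii,jj)  = d2(ii,jj)  + abs(X(ii+jj-1,k))^2;
        end
    end
end
bic2 = abs(num).^2 ./ (d1 .* d2 + eps);  % squared bicoherence, in [0, 1]
surf(bic2); shading interp               % clear peaks suggest phase coupling
```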

Why do we identify the nonlinearities? You might have asked that question while listening to this webinar. So mainly to identify the control valve problems. So here, you see there are four different types of fingerprints for control valve problems. So they have very distinct shapes in those figures.

Actually, the problem seen in figure (c) is the most common one. It is known as hysteresis and deadband, and also as a stiction problem, which describes the static friction occurring in the control valve. That is the most commonly faced problem in control valves.

With this problem, the controller sends a signal to the valve to open it, but the valve is stuck due to static friction. After some time, the static friction is overcome, because the controller keeps increasing its output to the valve, and the valve opens suddenly and excessively. Then your signal shows a peak up to some point.

Then your controller realizes that it needs to decrease the error between the setpoint and the process value and tries to close the valve, but the static friction comes into play again. This repeats and creates a kind of cyclic behavior, which reveals itself as a sawtooth shape in the controller output, as can be seen from this industrial example.

So if you have a stiction problem in your control valve, with high probability you will observe that kind of figure in the process trends: the sawtooth shape again when you look at the controller output signal.

Looking at the periodogram of that signal, you will also see that there are two frequencies dominating the signal. And then comes the bispectral analysis: as you can see, there are phase-coupling, or frequency-coupling, effects, and the peaks and the mountain shape are pretty clear.

But if you simply plot the controller output versus the process measurement, you will see an almost perfect ellipse if you have that kind of stiction problem. Just look at how clean the shape is. It is not a 100% ellipse, but you can approximate that kind of shape with an ellipse. The stiction problem is very clear if you remember the fingerprints of the control valve problems we saw a moment ago.

However, you might not be that lucky every time. Looking at the process trends, one can suspect stiction in this figure, which is also an industrial example. The simple plot of controller output versus process measurement gives some idea about the stiction, but it looks messy: it has an elliptical shape, but it is not as easy to draw a conclusion as it was for the process on the left.

Still, the simple plot of those variables gives a good idea about the stiction problem. One idea might be to apply a filter to isolate the exact frequencies you just obtained from the bispectral analysis and then plot your process measurement versus the controller output signal. In that case, you might get a much clearer signature of the stiction.
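
A minimal sketch of that filtering step could look like the following; bandpass is from the Signal Processing Toolbox, and the sample time, tag names, and the reuse of the 877-second component from the earlier example are assumptions.

```matlab
% Minimal sketch: isolate the detected oscillation band before plotting
% controller output (op) against the process measurement (pv).
Ts   = 10;                                   % sample time in seconds (assumption)
f0   = 1/877;                                % Hz, e.g. the 877 s component above
band = [0.5 2] * f0;                         % pass band around f0 (assumption)
opF  = bandpass(detrend(op(:)), band, 1/Ts);
pvF  = bandpass(detrend(pv(:)), band, 1/Ts);
plot(opF, pvF); xlabel('OP (filtered)'); ylabel('PV (filtered)');
```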

That is also a good idea if you would like to characterize your stiction problem. Simply put, if you fit an ellipse to your controller output versus process measurement data, its extent along the x-axis will quantify how much stiction you have in your specific control valve, whereas its extent along the y-axis will quantify how much hysteresis you have.

But unfortunately, fitting an ellipse to such an x-y data set is not a trivial task. There are, however, methods available in the literature. So if you would like to create KPIs from the stiction analysis, you can use those methods to fit an ellipse to your data and see how much hysteresis or how much stiction you have in your control valve.
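
As one possible starting point, a generic algebraic least-squares conic fit is sketched below; this is not the specific published stiction-quantification method, it reuses the filtered opF and pvF signals from the sketch above, and it assumes the data really does trace an ellipse.

```matlab
% Minimal sketch: least-squares fit of a conic to filtered OP-PV data and a
% rough estimate of its extent along the OP axis (a proxy for apparent stiction).
x = opF(:);  y = pvF(:);                 % filtered signals from the previous sketch
D = [x.^2, x.*y, y.^2, x, y];            % conic: a*x^2 + b*x*y + c*y^2 + d*x + e*y = 1
p = D \ ones(size(x));                   % least-squares conic coefficients
ca = p(1); cb = p(2); cc = p(3); cd = p(4); ce = p(5);

% Extreme OP values of the ellipse: eliminate y via dF/dy = 0 and solve the
% resulting quadratic in x. The distance between the roots is the OP-axis width.
qa = ca - cb^2/(4*cc);
qb = cd - cb*ce/(2*cc);
qc = -ce^2/(4*cc) - 1;
xExtremes = roots([qa qb qc]);           % complex roots mean the fit is not an ellipse
apparentStiction = abs(diff(xExtremes));
```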

Since our time is limited, I will not be able to go into the details of more advanced detection techniques. So before concluding this session, I would like to talk a little bit about control maintenance and how we can approach it.

Of course, diagnostics is an important part of control performance assessment. However, the bigger picture is always the performance assessment itself; that is the fundamental analysis you need to do to draw conclusions about your control system.

But there is one problem with control performance assessment. In most cases, performance degradation cannot be attributed to a single control loop. Rather, it is a problem of a chain of control loops that affect each other, where problems propagate through the controllers. But you may ask, how does that happen?

Consider that you have this kind of process; by the way, this is again an industrial example. There are some heat exchangers, a separation column, and a reactor over there. Assume that you have just calculated the performance indices of these controllers.

It looks like there are some problems with the temperature controller TC1 and the pressure controllers PC1 and PC5. But you may ask, could there be a main actor in this process which affects the others? After all, there are recycle streams and upstream and downstream connections between the streams. And often, the answer is yes. But you may ask again, how can we identify that kind of problem?

Like this: I would like to show some of the propagation, again from an industrial example. First, it seems there is a problem with TC1, this temperature controller. In this particular case, it was just a 0.5-degree oscillation occurring in this temperature controller, which you cannot easily identify by just looking at the data, but which you can identify with frequency-analysis-based and time-domain-analysis-based methods. It is just 0.5 degrees; you can imagine that is not that big compared to the nominal value of the temperature.

But that oscillation propagates into the next controller, TC3. That changes the vapor fraction at the inlet of column C1, which has a direct effect on the vapor traffic in the column and causes variations in the top pressure controller, PC1. That in turn causes variations in the overhead drum, C4, and in the reflux flow rate controller, FC1.

That causes variations in the liquid traffic inside the column and affects the bottom level controller, LC3. LC3 is the master controller of FC3, and FC3 is its slave controller, so the master controller propagates the oscillation through its slave controller. And you will observe that kind of problem there.

So oscillations propagate and cause performance degradation across your system. Even though this is quite a small unit with around 20 control loops, the oscillations propagate. And if you have downstream units connected to the exit lines of this unit, those oscillations might propagate through the other units and cause follow-up problems. So you observe performance problems in these controllers and in the other connected controllers.

It is possible to identify this chain of propagation with different methods. You might use causality analysis, or some kind of common-oscillation analysis where you use periodograms or other frequency-domain analysis, or you can even use information theory to detect these propagations.
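
As a simple illustration of the common-oscillation idea, the sketch below compares the dominant periodogram period of several loops; the loops struct array, its fields, and the sample time are hypothetical, and causality or information-theoretic methods would go further than this screening step.

```matlab
% Minimal sketch: screen several loops for a shared dominant oscillation
% period. Loops whose periods agree closely are candidates for one root cause.
Ts = 10;                                       % sample time in seconds (assumption)
dominantT = zeros(numel(loops), 1);
for k = 1:numel(loops)
    y = detrend(loops(k).pv(:));               % loops(k).pv is a hypothetical field
    [pxx, f] = periodogram(y, [], [], 1/Ts);
    [~, idx] = max(pxx(2:end));                % skip the zero-frequency bin
    dominantT(k) = 1 / f(idx+1);               % dominant period in seconds
end
tbl = table({loops.name}', dominantT, 'VariableNames', {'Loop', 'Period_s'});
disp(sortrows(tbl, 'Period_s'))                % similar periods group together
```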

In the end, if you use different algorithms or methods, you will end up with different metrics or KPIs. The possible metrics you can calculate are these, but it is not limited to those; there are many more. You can even create your own KPIs or metrics as performance indicators for your system.

However, in the end you will realize that some of the metrics help you correct problems in the field, for example control valve problems, whereas others help you correct problems from in front of the computer, like controller tunings. So it depends on how much effort you can put in.

In the last section, I would like to talk briefly about some takeaways and remarks. But before going into the takeaways, I would like to talk briefly about where you can take advantage of MATLAB in this kind of application. First, MATLAB has extensive libraries for different purposes and well-written documentation. There is also a big community behind it where you can ask questions or search for examples, so there is a vast amount of information out there that you can utilize in your case.

Second, MATLAB makes it easy to create a prototype before going live. You can build your prototype application with relatively low effort and test it in different scenarios, and you do not need to ask for help from an IT expert. Of course, you can also deploy your applications by using MATLAB toolboxes, for example code generation, but I will not go into the details of that. It is possible if you would like to.

And third, MATLAB has the ability to work with streaming data. If you have the required toolboxes, you can directly connect to your historian, fetch data, run your application, and see the KPIs it calculates. It also has the ability to run your application both at the edge and in the cloud, so you have that flexibility.

You can also integrate your code or application with different platforms or languages. If you already have some applications developed, you can push data to or pull data from those applications, or you can integrate code that was developed in different languages into the MATLAB environment. This gives you great flexibility to work with.

And some takeaways before concluding the presentation. Data comes first: you should have a good database to proceed further in the assessment. The estimation of control performance is straightforward, as you saw in the previous sections; for example, the minimum variance benchmark is simple to use. Diagnostics, however, require some additional metrics to quantify the problems. You can also create alarms based on these metrics and make it easier for your control engineers to draw conclusions from those KPIs.

But in this case, the selection of the correct parameters for the metrics is crucial, because if you select an inappropriate parameter for the identification method you are using, it might lead to wrong decisions. And that might cause problems in the resource allocation for the maintenance tasks.

And process interactions, or propagations, should always be considered, since it is often the case that if you have many poor-performing control loops, there is, with high probability, one control loop causing the problems and propagating them to the other controllers, and it can be identified as the root cause. So you need to consider the process interactions.

And the final remark is that you should always keep the base layer performance as high as possible to get the full benefit of your advanced process control layer, if you have one in your system. Otherwise, you may lose 20% to 25% of your APC benefit.

So with that, I would like to conclude my presentation. Of course, there is much more than the topics we have discussed here today, but I tried to cover the main points of a control performance assessment application. I hope it gives you some insight into the techniques and the philosophy of the control performance assessment journey, let's say.

So I would like to thank you all for attending this webinar. In the remaining time, I will try to answer any questions posted in the Q&A box. Thank you.