Video length is 5:18

Estimating Parameters (Mixed Effects) | SimBiology Tutorials for QSP, PBPK, and PK/PD Modeling and Analysis

From the series: SimBiology Tutorials for QSP, PBPK, and PK/PD Modeling and Analysis

Estimating Parameters (Mixed Effects) video: This video demonstrates how to perform nonlinear mixed-effects parameter estimation on population data. The results of the estimation are visualized, and two estimations with different covariate models are compared quantitatively using statistics such as the Akaike and Bayesian Information Criteria.

Published: 13 Sep 2020

In this video, we will be performing nonlinear mixed-effects parameter estimation on a data set. The data set is the same theophylline data set that we imported in a separate tutorial video on data import and non-compartmental analysis. In a previous video, we also performed nonlinear regression, or fixed-effects parameter estimation, so we can see the results of that estimation here.

What you can see is that for this one-compartment model, where we pooled all the data together to get a single set of parameter estimates that should fit all of the data, there are a few discrepancies between the simulation and the actual data. By also estimating random effects, which capture the interindividual variability between the individuals in the data set, we should be able to get a better fit.

The way that we do that is by adding another fit data program, using the same data set and one-compartment model as before. But now, instead of nonlinear regression, we will use mixed effects. If you want to use a stochastic solver, you can also choose mixed effects using a stochastic solver. The setup is exactly the same.

So we need to map our concentration column to the central drug species and the dose column to the species Dose_Central. We're estimating the same parameters, but now you can see that each parameter has a fixed effect, which is theta, and a random effect, which is eta. For these, we should fill in some initial estimates, which we can take from the fixed-effects results. After all, the thetas are the fixed effects.

So what you can do is take the constant error model results, take the log of those estimates (since the parameters are log-transformed), and fill those in for the thetas. I recommend not entering the exact values, because you might run into numerical issues. So that's what I'm going to do now.

So I've filled in the transformed values for theta1, theta2, and theta3: -0.7, -3, and 0.4. These are approximate values for the fixed effects. Again, I can choose an error model, and there are some options you can set, like the covariance pattern or the type of optimization. With that, everything is set and we can start the estimation.
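As a side note, the log transformation of the initial estimates can be sketched in Python. This is a minimal illustration, not SimBiology code; the parameter names and the raw estimate values below are assumptions, chosen so that their logs round to the values entered above.

```python
import math

# Hypothetical fixed-effects estimates from the earlier constant-error
# nonlinear regression fit; names and values are assumed for illustration,
# picked so their logs round to -0.7, -3.0, and 0.4.
fixed_estimates = {"Cl_Central": 0.5, "ka": 0.05, "V_Central": 1.5}

# The parameters are log-transformed in the mixed-effects setup, so the
# theta initial values are the natural logs of the estimates, rounded
# slightly so we don't start at the exact fixed-effects optimum.
thetas = {name: round(math.log(value), 1)
          for name, value in fixed_estimates.items()}
print(thetas)
```

Rounding to one decimal is exactly the "don't use the exact values" advice: it keeps the starting point close to, but not on, the fixed-effects solution.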

As the estimation progresses, you can see that the LogLikelihood increases. We want to maximize the LogLikelihood; equivalently, the optimizer minimizes the negative log-likelihood as its objective function, similar to what we saw in the fixed-effects example. You can also see how the fixed-effects estimates for theta1, theta2, and theta3 progress over time, as well as those for the random-effect variances, psi1_1, psi2_2, and psi3_3.
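To make the roles of theta, eta, and psi concrete: in a log-transformed mixed-effects model, each individual's parameter value is the exponential of the fixed effect plus that individual's random effect, and the random effects are drawn from a normal distribution whose variances are the psi diagonal entries. A minimal sketch in Python (not SimBiology code; the theta and psi values are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Log-transformed mixed-effects model for one parameter: for individual i,
#     p_i = exp(theta + eta_i),    eta_i ~ N(0, psi)
# theta is the fixed effect; psi is the random-effect variance (the
# psi1_1, psi2_2, psi3_3 estimates are the diagonal of that covariance).
theta = -0.7   # fixed effect (illustrative)
psi = 0.1      # random-effect variance (illustrative)

eta = rng.normal(0.0, np.sqrt(psi), size=12)  # one eta per individual
p_individual = np.exp(theta + eta)            # individual parameter values
```

Setting eta to zero recovers the population value exp(theta); the spread of p_individual is what represents the interindividual variability.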

Once the parameter estimation finishes, we can have a look at the diagnostic plots. In the individual fits, the simulations match the experimental data very well. That is reflected by the predicted-versus-observed plot, where you want the blue dots to be closer to the diagonal than the red dots: the red dots represent the fixed-effects-only (population) predictions, and the blue dots the predictions taking into account both fixed and random effects.

The same goes for the residuals versus time, where you want the blue dots to lie closer than the red dots to the center line. You can also look at the residual distributions, which you want to be approximately normal. Lastly, we can have a look at a box plot of the random effects, where you again want the random effects to be roughly normally distributed. For this particular case, we get a pretty good fit.
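The residual comparison in these plots can be mimicked numerically: individual predictions (fixed plus random effects) should leave smaller residuals than population predictions (fixed effects only). A rough sketch with simulated numbers (all values below are made up for illustration, not output from the estimation):

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated observations and two sets of predictions: population
# (fixed effects only, coarser) and individual (fixed plus random
# effects, tighter). All numbers are invented for illustration.
observed = rng.normal(10.0, 1.0, size=100)
pred_population = observed + rng.normal(0.0, 1.5, size=100)
pred_individual = observed + rng.normal(0.0, 0.3, size=100)

res_population = observed - pred_population
res_individual = observed - pred_individual

# A good individual fit shows residuals hugging the center (zero) line
# more tightly than the population residuals do.
print(np.abs(res_population).mean(), np.abs(res_individual).mean())
```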

So we can save this as mixed_constant, for the constant error model. One other thing we can do is add covariates to the statistical model. For example, we can add the covariate weight, center it by the mean, and normalize it by the mean. We can then use this transformed covariate, for example, on the central volume.

To investigate whether this covariate has a significant effect on the central volume, we add an extra fixed effect, call it theta4, that we multiply by the transformed covariate, tWT, and then we add the eta1. We can then give a separate initial estimate for theta4; in this case, because we centered and normalized the covariate, we can assume it is 0. Then we can perform the parameter estimation again.
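The covariate transformation and model described above can be sketched as follows. This is an illustration, not SimBiology code; the weight values are assumed, and the model form taken here is V_i = exp(theta3 + theta4 * tWT_i + eta1_i):

```python
import numpy as np

# Illustrative body weights for 12 subjects (assumed values, not
# necessarily the ones in the theophylline data set).
WT = np.array([79.6, 72.4, 70.5, 72.7, 54.6, 80.0,
               64.6, 70.5, 86.4, 58.2, 65.0, 60.5])

# Center by the mean and normalize by the mean:
tWT = (WT - WT.mean()) / WT.mean()

# Covariate model on the central volume with an extra fixed effect
# theta4 multiplying the transformed weight:
#     V_i = exp(theta3 + theta4 * tWT_i + eta1_i)
theta3 = 0.4               # theta for the central volume (illustrative)
theta4 = 0.0               # starts at 0 because tWT is centered
eta1 = np.zeros_like(tWT)  # random effects zeroed for this sketch
V = np.exp(theta3 + theta4 * tWT + eta1)
```

Because tWT has zero mean, a starting value of 0 for theta4 leaves the baseline volume unchanged, which is why 0 is a reasonable initial estimate.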

Once the optimization is finished, we can again have a look at the diagnostic plots. We can see that we get quite a similar result, so it may not be worth adding this particular covariate to our statistical model. Similar to the fixed-effects case, where we compared the results from the constant and proportional error models, we can do the same here.

We can create a new data sheet, add the results from each of these mixed-effects estimations, and in particular look at the difference in the statistics. You can see that there is very little difference between the two models, whether or not you use weight as a covariate on the central volume. In that case, it's generally better to use the least complex model, which is also consistent with maximizing the LogLikelihood and minimizing the AIC and BIC here.
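The statistics used for this comparison follow directly from the log-likelihood: AIC = -2*LL + 2k and BIC = -2*LL + k*ln(n), where k is the number of estimated parameters and n the number of observations. A small sketch (the log-likelihood values, parameter counts, and observation count below are made-up illustrations, not the actual results):

```python
import math

def aic_bic(log_likelihood, n_params, n_obs):
    """Akaike and Bayesian Information Criteria from a log-likelihood."""
    aic = -2.0 * log_likelihood + 2.0 * n_params
    bic = -2.0 * log_likelihood + n_params * math.log(n_obs)
    return aic, bic

# Hypothetical comparison: base model vs. model with the weight covariate
# (one extra fixed effect). All numbers here are illustrative.
aic_base, bic_base = aic_bic(-170.0, n_params=7, n_obs=132)
aic_cov, bic_cov = aic_bic(-169.8, n_params=8, n_obs=132)

# A tiny gain in log-likelihood does not offset the extra parameter,
# so both criteria favor the simpler model (lower AIC/BIC is better).
```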

That concludes this video, in which we performed nonlinear mixed-effects parameter estimation.