# Physics-informed NN for parameter identification


Dear all,

I am trying to use the physics-informed neural network (PINN) for an inverse parameter identification for any ODE or PDE.

I followed the tutorial https://uk.mathworks.com/help/deeplearning/ug/solve-partial-differential-equations-with-lbfgs-method-and-deep-learning.html provided in the help center.

I am wondering whether a PINN can also identify the parameters (the coefficients in the PDE). Unfortunately, I do not know how to map the parameters learned by the network back to the physical parameters.

Thanks in advance.

##### 0 comments

### Answers (4)

Ben
2022-8-24

Hi Dawei,

The PINN in that example assumes the PDE has fixed coefficients. To follow the method of Raissi et al. you can instead consider a parameterized class of PDEs, e.g. for Burgers' equation:

u_t + λ·u·u_x − μ·u_xx = 0

The method is then simply to minimize the loss with respect to both the neural network's learnable parameters and the coefficients λ and μ.

To adapt the example you can extend the parameters in the Define Deep Learning Model section:

```matlab
parameters.lambda = dlarray(0);
parameters.mu = dlarray(-6);
```

Next you will need to modify the modelLoss function to replace the line `f = Ut + U.*Ux - (0.01./pi).*Uxx` with the following:

```matlab
lambda = parameters.lambda;
mu = exp(parameters.mu);
f = Ut + lambda.*U.*Ux - mu.*Uxx;
```

Finally you will have to fix the computation of numLayers in the model function, since adding lambda and mu to parameters invalidates it. I simply did the following:

```matlab
numLayers = (numel(fieldnames(parameters))-2)/2;
```

This makes the example similar to the authors' code. I didn't get very good results for coefficient identification when I tried this; that is possibly due to differences in the options between fmincon and the authors' use of ScipyOptimizerInterface. I'm trying that out currently, but hopefully this much will help you get started.
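Putting the pieces above together, a sketch of the modified modelLoss might look like the following. The helper names (model, mse) and the signature follow the MathWorks example; your version of the example may differ slightly:

```matlab
function [loss,gradients] = modelLoss(parameters,X,T,X0,T0,U0)
    % Network prediction U(x,t).
    U = model(parameters,X,T);

    % Derivatives via automatic differentiation.
    gradientsU = dlgradient(sum(U,"all"),{X,T},EnableHigherDerivatives=true);
    Ux = gradientsU{1};
    Ut = gradientsU{2};
    Uxx = dlgradient(sum(Ux,"all"),X,EnableHigherDerivatives=true);

    % PDE residual with learnable coefficients. mu is parameterized as
    % exp(parameters.mu) to keep the diffusion coefficient positive.
    lambda = parameters.lambda;
    mu = exp(parameters.mu);
    f = Ut + lambda.*U.*Ux - mu.*Uxx;
    zeroTarget = zeros(size(f),"like",f);
    lossF = mse(f,zeroTarget);

    % Data / initial-and-boundary-condition loss.
    U0Pred = model(parameters,X0,T0);
    lossU = mse(U0Pred,U0);

    loss = lossF + lossU;

    % Because lambda and mu live in the parameters struct, dlgradient
    % returns their gradients alongside the network weights.
    gradients = dlgradient(loss,parameters);
end
```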

##### 14 comments

Rohit
2023-7-27

Edited: Rohit
2023-7-28

Hello @CSCh @Ben @James Gross, I am stuck at the same point, solving an inverse problem with a PINN. Is it possible to share the latest full code for this? I believe there have been some nomenclature changes in the ADAM and L-BFGS codes in the help center, which makes them a bit confusing to follow. Specifically, I wanted to know: how do I add the unknown parameters of the differential equation to the trainable parameters of the NN?

Thanks for your help.

Rohit

##### 2 comments

Ben
2023-7-28

@Rohit I believe it should be possible to modify the new version of the example to solve the inverse problem with lbfgsupdate by:

- Creating a struct of parameters with parameters.net = net and parameters.lambda = dlarray(0) and parameters.mu = dlarray(-6) as above.
- Implementing modelLoss to take parameters in place of net, making similar modifications to above to use parameters.lambda and parameters.mu to specify the PDE loss term, and computing the necessary gradients for parameters.lambda and parameters.mu using dlgradient.
- Passing parameters in place of net to the lbfgsupdate function in the training loop.

This should be the most natural way to add the extra unknown coefficients and learn them via lbfgsupdate, rather than adding these values to the dlnetwork itself.

The workflow using adamupdate should be similar to before.

Hope that helps.
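The steps above can be sketched as follows. This assumes a modelLoss like the one discussed earlier in the thread that takes the parameters struct and returns the loss and its gradients; the data variable names are placeholders from the example:

```matlab
% Collect the network and the unknown PDE coefficients in one struct so
% that lbfgsupdate optimizes them jointly.
parameters.net = net;               % dlnetwork from the example
parameters.lambda = dlarray(0);     % initial guess for lambda
parameters.mu = dlarray(-6);        % mu learned in log space: mu = exp(parameters.mu)

% lbfgsupdate calls the loss function itself, so wrap modelLoss in a
% handle that closes over the training data and returns [loss,gradients].
lossFcn = @(p) dlfeval(@modelLoss,p,X,T,X0,T0,U0);

solverState = lbfgsState;
for iteration = 1:maxIterations
    [parameters,solverState] = lbfgsupdate(parameters,lossFcn,solverState);
end
```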

Joshua Prince
2023-8-1

binlong liu
2023-8-2

Hello @Ben @James Gross, I want to use a PINN to solve PDEs using two neural networks (net_1 with inputs x,t and output c_w; net_2 with inputs x,t,r and output c_intra) but with one loss function (loss_total = loss_PDE + loss_BCIC + loss_OB). I tried Adam to update the learnable parameters of net_1 and net_2 and it works, but the errors are not small enough, so I want to try the L-BFGS method; however, I have no idea how to do this in MATLAB. The following code shows how I tried it, but it didn't work. Could you help me solve this problem?

Thanks for your help!

```matlab
[loss_total,loss_PDE,loss_BCIC,loss_OB,t_BTC,c_w_BTC_Pred,gradients_1,gradients_2] = dlfeval(@modelLoss,net_1,net_2,t_w,x_w,r_w,t_intra,x_intra,r_intra, ...
    tBC1_w,tBC2_w,xBC1_w,xBC2_w,cBC1_w,tBC1_intra,tBC2_intra,xBC1_intra, ...
    xBC2_intra,rBC1_intra,rBC2_intra,tIC_w,xIC_w,cIC_w,tIC_intra,xIC_intra,rIC_intra,cIC_intra);

% Initialize the TrainingProgressMonitor object. Because the timer starts
% when you create the monitor object, make sure that you create the object
% close to the training loop.
monitor = trainingProgressMonitor( ...
    Metrics="TrainingLoss", ...
    Info="Epoch", ...
    XLabel="Epoch");

% Train the network using a custom training loop. Use the full data set at
% each iteration. Update the network learnable parameters and solver state
% using the lbfgsupdate function. At the end of each iteration, update the
% training progress monitor.
for i = 1:numEpochs
    [net_1, solverState] = lbfgsupdate(net_1,[loss_total,gradients_1],solverState);
    [net_2, solverState] = lbfgsupdate(net_2,[loss_total,gradients_2],solverState);
    updateInfo(monitor,Epoch=i);
    recordMetrics(monitor,i,TrainingLoss=solverState.Loss);
end
```

##### 1 comment

Ben
2023-8-2

@binlong liu - you appear to be using lbfgsupdate incorrectly. The second input should be the loss function as a function_handle; see this part of the doc.

Note that you don't need two lbfgsupdate calls: you can put the two networks together in a cell array or struct as described here. In fact this is likely to be necessary, since lbfgsupdate calls the loss function for you, and you need both networks (and their gradients) to evaluate it.
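As a sketch of that correction, using the variable names from the code above (modelLossTotal is a hypothetical wrapper that evaluates the total loss from the struct of networks and returns the loss and the gradients with respect to that struct):

```matlab
% Put both networks in one struct so a single lbfgsupdate call
% updates them jointly against the shared total loss.
nets.net_1 = net_1;
nets.net_2 = net_2;

% lbfgsupdate expects a function handle returning [loss,gradients],
% where gradients are taken with respect to the first argument.
lossFcn = @(n) dlfeval(@modelLossTotal,n,t_w,x_w,r_w,t_intra,x_intra,r_intra);

solverState = lbfgsState;
for epoch = 1:numEpochs
    [nets,solverState] = lbfgsupdate(nets,lossFcn,solverState);
    updateInfo(monitor,Epoch=epoch);
    recordMetrics(monitor,epoch,TrainingLoss=solverState.Loss);
end
```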
