Field-Oriented Control of PMSM Using Position Estimated by Neural Network
This example shows how to implement field-oriented control (FOC) of a permanent magnet synchronous motor (PMSM) using rotor position estimated by an auto-regressive neural network (ARNN) trained with Deep Learning Toolbox™.
An FOC algorithm requires real-time rotor position feedback to implement speed control and to perform the mathematical transformations on the reference stator voltages and feedback currents. Traditionally, such algorithms rely on physical position sensors. However, because of their accuracy and cost effectiveness, sensorless position estimation solutions can be a strong alternative to physical sensors.
The example provides one such sensorless solution that utilizes neural network-based artificial intelligence (AI) to estimate real-time rotor position. You can use this example to train a neural network using data generated by an existing quadrature encoder sensor-based FOC algorithm. The trained neural network acts as a virtual position sensor and estimates the rotor position.
The example guides you through the workflow to train, simulate, and implement the neural network using the following steps:
Generate data needed to train the neural network.
Extract relevant data from the generated data.
Concatenate extracted data.
Process concatenated data.
Train neural network using processed data.
Export trained neural network to Simulink® model associated with this example.
You can then simulate and deploy the Simulink model containing the trained neural network to the hardware and run a PMSM using FOC.
The following figure shows the entire workflow to implement a neural network-based virtual position sensor.
Note:
This example enables you to use either the trained neural network or the quadrature encoder sensor to obtain the rotor position.
By default, the example guides you to generate the training data by simulating a model of the motor. However, if you have training data obtained from hardware running an actual motor, you can also use such a data set to train the neural network.
Model
The example includes the target model mcb_pmsm_foc_qep_deep_learning_f28379d.
You can use this model to:
Generate the data needed to train the neural network.
Accommodate the trained neural network.
Run a PMSM using FOC in simulation or on hardware.
This model supports both simulation and code generation.
Target Model Architecture
The following figure shows the architecture of the FOC algorithm that the target model implements:
The model enables you to use either the trained neural network or the quadrature encoder sensor to obtain the rotor position.
As shown in the figure, the neural network uses the Vα, Vβ, Iα, and Iβ inputs to output θe, sinθe, and cosθe, where:
Vα and Vβ are the voltages along the α and β axes, respectively (in per-unit).
Iα and Iβ are the currents along the α and β axes, respectively (in per-unit).
θe is the motor electrical position (in per-unit).
For more details about the per-unit (PU) system, see Per-Unit System.
Therefore, you must train the neural network using the Vα, Vβ, Iα, Iβ, and θe data so that it can accurately map the inputs to the outputs and act as a virtual position sensor by estimating the rotor position.
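To make the input signals concrete, the following minimal MATLAB sketch shows how the α- and β-axis currents can be derived from three-phase measurements with an amplitude-invariant Clarke transform and normalized to per-unit. The phase values and base current here are illustrative assumptions, not values from the example.
% Hypothetical phase-current samples (in amperes) and per-unit base value.
Ia = 1.2; Ib = -0.4; Ic = -0.8;   % example three-phase currents (balanced set)
Ibase = 5;                        % assumed per-unit base current
% Amplitude-invariant Clarke transform to the alpha-beta frame.
Ialpha = (2*Ia - Ib - Ic)/3;
Ibeta  = (Ib - Ic)/sqrt(3);
% Normalize to per-unit, matching the signal ranges the neural network expects.
Ialpha_pu = Ialpha/Ibase;
Ibeta_pu  = Ibeta/Ibase;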
Generate Training Data from Target Simulink Model
The first step to using the example is to generate the Vα, Vβ, Iα, Iβ, and θe data needed to train the neural network, for which you can use the TrainingDataCapture utility function.
This utility function obtains the training data by simulating the target model mcb_pmsm_foc_qep_deep_learning_f28379d.slx, which contains a quadrature encoder sensor-based FOC algorithm.
The function simulates the target model and performs a sweep across speed and torque reference values to capture the electrical position as well as the α- and β-equivalents of the stator voltages and currents.
What Is a Sweep Operation?
Using the sweep operation, the TrainingDataCapture utility function selects a range of speed-torque operating points, simulates the model (to run quadrature encoder-based FOC) at each operating point (or reference speed-torque value pair) for a limited time, and records the resulting values of Vα, Vβ, Iα, Iβ, and θe after the stator voltages and currents reach a steady state.
The following figures show the constant blocks that the utility function uses to select each operating point:
The utility function uses the Speed_Training block (available in the mcb_pmsm_foc_qep_deep_learning_f28379d/Serial Receive/SCI_Rx/Simulation subsystem) to select a speed reference value.
The utility function uses the Torque_Training block (available in the mcb_pmsm_foc_qep_deep_learning_f28379d/Inverter and Motor - Plant Model/Simulation/Load_Profile (Torque) subsystem) to select a torque reference value.
After it selects a speed-torque operating point, the utility function uses the mcb_pmsm_foc_qep_deep_learning_f28379d/Current Control/Control_System/Closed Loop Control/Subsystem subsystem to capture the Vα, Vβ, Iα, Iβ, and θe values corresponding to this operating point and record them in a Simulink.SimulationOutput object.
The TrainingDataCapture utility function also computes the sinθe and cosθe values from the captured θe data and records them for this operating point.
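The shipped TrainingDataCapture function implements this sweep; the following hedged sketch only illustrates the general pattern of programmatically writing the two constant blocks and simulating the model at each operating point. The grid values and the use of set_param and sim here are assumptions about the implementation, not its actual code.
% Conceptual sweep over speed-torque operating points (illustrative values).
model = 'mcb_pmsm_foc_qep_deep_learning_f28379d';
speedRefs  = linspace(0.1, 1.0, 40);   % per-unit speed references (assumed grid)
torqueRefs = linspace(0.1, 1.0, 25);   % per-unit torque references (assumed grid)
runs = {};
for spd = speedRefs
    for trq = torqueRefs
        % Write the operating point into the constant blocks named in this section.
        set_param([model '/Serial Receive/SCI_Rx/Simulation/Speed_Training'], ...
            'Value', num2str(spd));
        set_param([model '/Inverter and Motor - Plant Model/Simulation/' ...
            'Load_Profile (Torque)/Torque_Training'], 'Value', num2str(trq));
        simOut = sim(model);     % returns a Simulink.SimulationOutput object
        runs{end+1} = simOut;    % collect results for later refinement
    end
end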
Speed Ranges for Sweep Operation
The TrainingDataCapture utility function performs the sweep operation by selecting operating points across the following two speed ranges:
Zero to low-speed – This range corresponds to 0% to 10% of the motor rated speed. Because real-world applications show little load torque variation in this speed range, the utility function keeps the training data lean by maintaining an almost constant torque reference. Therefore, it primarily varies only the speed reference and collects only limited training data from this range.
Low to high-speed – This range corresponds to 10% to 100% of the motor rated speed. In this range, the utility function creates operating points by varying both the reference speed and reference torque values to collect the majority of the required training data.
Duration of Capture for Each Operating Point
The TrainingDataCapture utility function simulates the target model at each speed-torque operating point. After simulation begins for an operating point, the utility function captures the Vα, Vβ, Iα, Iβ, and θe data only for a limited duration defined by these variables:
The variables dCStartTime1 and dCEndTime1 define the duration of data capture in the low to high-speed range. Because of the higher motor speeds, the duration of data capture is usually small in this range.
The variables dCStartTime2 and dCEndTime2 define the duration of data capture in the zero to low-speed range. Because of the lower motor speeds, the duration of data capture is usually large in this range.
This approach ensures that the utility function captures data only after the stator currents and voltages reach a steady state (after simulation begins).
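As an illustration of this windowing, the following hedged sketch trims a logged signal to the capture window. The logged signal name ('Theta_e') and the logging format are assumptions; the actual utility function handles this internally.
% Keep only the samples recorded inside the capture window (steady state).
theta_ts = simOut.logsout.get('Theta_e').Values;   % logged timeseries (name assumed)
inWindow = (theta_ts.Time >= dCStartTime2) & (theta_ts.Time <= dCEndTime2);
thetaSteady = theta_ts.Data(inWindow);             % zero to low-speed window data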
You can tune these variables using this live script.
Number of Reference Speed and Torque Data Capture Points
You can also use the following variables to define the number of reference speed and torque data capture points:
dPSpdZtoL – This variable defines the number of reference speed points that the utility function should use in the zero to low-speed range.
dPSpdLtoH – This variable defines the number of reference speed points that the utility function should use in the low to high-speed range.
dPTorLtoH – This variable defines the number of reference torque points that the utility function should use in the low to high-speed range.
Note: Because the utility function uses a relatively fixed torque reference in the zero to low-speed range, the number of reference torque points is fixed in this range.
You can tune these variables using this live script.
The utility function captures, appends, and stores the data for all operating points in Simulink.SimulationOutput objects, as shown in the following figure:
Run the following code to generate the training data.
Note: Running the TrainingDataCapture utility function using these parameters can take a long time (approximately three hours).
%% Capture the training data using Simulink Environment
model = 'mcb_pmsm_foc_qep_deep_learning_f28379d'; % Minor updates in model required for obtaining training data
dCStartTime1 = 3.8; % dataCaptureStartTime for low to high speed
dCEndTime1 = 4;     % dataCaptureEndTime for low to high speed
dCStartTime2 = 3.5; % dataCaptureStartTime for zero to low speed
dCEndTime2 = 4;     % dataCaptureEndTime for zero to low speed
dPSpdLtoH = 40;     % dataPointsSpeedLowtoHigh
dPSpdZtoL = 100;    % dataPointsSpeedZerotoLow
dPTorLtoH = 25;     % data points torque (low (0.1 pu) to high (1 pu) speed)
[lowtohighspeeddata,zerotolowspeeddata] = ...
    TrainingDataCapture(model,[dCStartTime1,dCEndTime1],[dCStartTime2,dCEndTime2],...
    [dPSpdZtoL,dPSpdLtoH,dPTorLtoH]);
The TrainingDataCapture utility function stores the generated data in the following Simulink.SimulationOutput objects:
zerotolowspeeddata
lowtohighspeeddata
To access the TrainingDataCapture utility function, click TrainingDataCapture.
Refine and Process Data
The next step is to extract the relevant data from the data you generated in the previous section and then process it.
Extract One Electrical Cycle Data
In the previous section, for each speed-torque operating point, the TrainingDataCapture utility function captured data for multiple electrical cycles during the time interval defined by the variables dCStartTime1, dCEndTime1, dCStartTime2, and dCEndTime2.
This section explains how you can refine the data to extract only one electrical cycle of data for each operating point, which the example can use to train the neural network.
Run the following code to extract one electrical cycle of data for all operating points.
oneCycleDataLtoH = OneElecCycleExtrac(lowtohighspeeddata);
oneCycleDataZtoL = OneElecCycleExtrac(zerotolowspeeddata);
The OneElecCycleExtrac utility function accepts the Simulink.SimulationOutput objects zerotolowspeeddata and lowtohighspeeddata and returns the refined data, which is stored in the following two objects:
oneCycleDataZtoL – This Simulink.SimulationOutput object stores the one electrical cycle data for the zero to low-speed range.
oneCycleDataLtoH – This Simulink.SimulationOutput object stores the one electrical cycle data for the low to high-speed range.
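The shipped OneElecCycleExtrac function performs this refinement; the sketch below only illustrates the underlying idea. Because θe is logged in per-unit, it ramps from 0 to 1 and wraps, so one electrical cycle lies between two consecutive wrap points. All variable names here are assumptions.
% Locate wrap points of the per-unit electrical position and keep one cycle.
wrapIdx = find(diff(theta) < -0.5);   % large negative jumps mark 1 -> 0 wraps
k1 = wrapIdx(1) + 1;                  % first sample after the first wrap
k2 = wrapIdx(2);                      % last sample before the next wrap
oneCycle = [Valpha(k1:k2), Vbeta(k1:k2), Ialpha(k1:k2), Ibeta(k1:k2), theta(k1:k2)];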
To access the OneElecCycleExtrac utility function, click OneElecCycleExtrac.
Concatenate One Electrical Cycle Data
Run the following code to concatenate the extracted data, ordering it from the zero to low-speed range to the low to high-speed range.
completeData = DataConcatenate(oneCycleDataLtoH, oneCycleDataZtoL);
The DataConcatenate utility function accepts the Simulink.SimulationOutput objects oneCycleDataZtoL and oneCycleDataLtoH and returns the concatenated data, which is stored in the Simulink.SimulationOutput object completeData.
To access the DataConcatenate utility function, click DataConcatenate.
Process Concatenated One Electrical Cycle Data
The example splits the data obtained after concatenation into the following three data sets:
Training data set – This data set includes 70% of the data in the object completeData. The example uses this data set to train the neural network.
Validation data set – This data set includes the next 15% of the data in the object completeData. The example uses this data set to validate the trained neural network.
Testing data set – This data set includes the remaining 15% of the data in the object completeData. The example uses this data set to test the trained neural network.
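The shipped DataPreparation function performs the actual split and the conversion to dlarray objects; the following minimal sketch only shows the arithmetic of a 70/15/15 split on a generic N-by-M data matrix, purely as an illustration.
% Illustrative 70/15/15 split of an N-by-M matrix of input-output samples.
N      = size(data, 1);
nTrain = floor(0.70*N);
nVal   = floor(0.15*N);
trainSet    = data(1:nTrain, :);
validateSet = data(nTrain+1 : nTrain+nVal, :);
testSet     = data(nTrain+nVal+1 : end, :);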
Run the following code to split the data in the object completeData.
tPLtoH = dPSpdLtoH*dPTorLtoH;             % Total data points from the speed and torque sweep in the low to high-speed range
tPZtoH = (dPSpdLtoH*dPTorLtoH)+dPSpdZtoL; % Total data points from the speed and torque sweep in the zero to high-speed range
[traindatain,traindataout,testdatain,testdataout,validatedatain,validatedataout]...
    = DataPreparation(completeData,tPLtoH,tPZtoH)
The DataPreparation utility function accepts the following arguments:
completeData – This Simulink.SimulationOutput object stores the output of the DataConcatenate utility function.
tPLtoH – This variable value corresponds to dPSpdLtoH * dPTorLtoH.
tPZtoH – This variable value corresponds to (dPSpdLtoH * dPTorLtoH) + dPSpdZtoL.
The function returns the split data, which is stored in the following dlarray objects:
traindatain
traindataout
testdatain
testdataout
validatedatain
validatedataout
To access the DataPreparation utility function, click DataPreparation.
Train and Test Neural Network
The next step is to select a neural network and train it using the processed data that you generated in the previous section.
This example uses an auto-regressive neural network (ARNN) with the following series-parallel architecture.
In the preceding figure:
Vα, Vβ, Iα, Iβ, sinθe, and cosθe are the true values of the time-series training data.
The sinθe and cosθe inputs enter the network with a delay of one sample time.
The estimated sinθe and cosθe values are the outputs predicted by the neural network.
Therefore, the series-parallel architecture uses the Vα, Vβ, Iα, Iβ, and one-sample-delayed sinθe and cosθe inputs to predict the estimated sinθe and cosθe outputs.
The example provides the CreateNetwork utility function to train a neural network and build a nonlinear ARNN that can estimate the rotor position. The CreateNetwork utility function uses the Deep Learning Toolbox object dlnetwork (Deep Learning Toolbox) and the function trainnet (Deep Learning Toolbox) to create and train the neural network.
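For orientation, this is a hedged sketch of the dlnetwork-plus-trainnet pattern that CreateNetwork is described as using. The layer sizes, training options, and the mean-square error loss shown here are assumptions; see the shipped CreateNetwork function for the actual definition.
% Assumed network: 6 inputs (Valpha, Vbeta, Ialpha, Ibeta, delayed sin/cos),
% two fully connected layers with a tanh activation, 2 outputs (sin/cos).
layers = [
    featureInputLayer(6)
    fullyConnectedLayer(20)     % hidden layer width is an assumption
    tanhLayer
    fullyConnectedLayer(2)];
net = dlnetwork(layers);
% Assumed training configuration; the example may use different options.
options = trainingOptions("adam", ...
    MaxEpochs=100, ...
    ValidationData={validatedatain, validatedataout}, ...
    Verbose=false);
net = trainnet(traindatain, traindataout, net, "mse", options);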
Run the following code to train, test, and validate the neural network.
[net,testdatamse] = CreateNetwork(traindatain,traindataout,testdatain,...
testdataout,validatedatain,validatedataout)
The CreateNetwork utility function accepts the dlarray objects traindatain, traindataout, testdatain, testdataout, validatedatain, and validatedataout, and returns the trained neural network (ARNN) as well as the mean-square error information, which is stored in the following object and variable:
net – This dlnetwork object stores the trained neural network (ARNN).
testdatamse – This variable stores the mean-square error information that the utility function generated from the testing data set.
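As a hedged illustration, you could reproduce a test-set error of this kind by running the trained network on the test inputs and computing the mean-square error manually (assuming the dlarray data is formatted as the network expects):
% Forward pass on the test inputs, then a manual mean-square error.
pred = predict(net, testdatain);                    % dlnetwork inference
mseManual = mean((pred - testdataout).^2, 'all');   % compare against testdatamse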
To access the CreateNetwork utility function, click CreateNetwork.
For more details about the training process, see Building Neural Network for Virtual Position Sensing.
Export Trained Neural Network to Target Simulink Model
After you train the neural network, the next step is to export the trained ARNN to a Simulink model.
The CreateNetwork utility function generates a trained ARNN that contains two fully connected layers and an activation layer. The following figure shows the code snippet that defines the neural network layers.
Note: The example uses the exportNetworkToSimulink function from Deep Learning Toolbox to export the trained ARNN (net) to Simulink layer blocks so that you can use the network for simulation and code generation.
Run the following command to execute the function.
exportNetworkToSimulink(net)
The function accepts the dlnetwork object net and creates the corresponding layer blocks in a new Simulink model, as shown in the following figure:
Copy these layer blocks and add them to the mcb_pmsm_foc_qep_deep_learning_f28379d/Current Control/Neural Network Observer/Neural Network/NN_Model_ARNN subsystem (available in the target model mcb_pmsm_foc_qep_deep_learning_f28379d.slx) as shown in the following figure:
The following figures show the resulting architecture of the ARNN that acts as a virtual position sensor. To achieve good performance with time-series predictions, the example provides the previous output values (sinθe and cosθe) as additional inputs (with a one-sample delay).
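In MATLAB terms, this feedback corresponds to closed-loop operation: at run time the network's own previous predictions replace the true delayed sinθe and cosθe values. The following hedged sketch shows that loop; the data layout and initial condition are assumptions.
% Closed-loop ARNN inference: feed previous predictions back as inputs.
numSteps = size(u, 2);    % u: 4-by-N matrix of [Valpha; Vbeta; Ialpha; Ibeta]
y = zeros(2, numSteps);   % rows: predicted sin(theta_e) and cos(theta_e)
yPrev = [0; 1];           % assumed initial condition (theta_e = 0)
for k = 1:numSteps
    x = dlarray([u(:,k); yPrev], 'CB');   % 6-by-1 input: measurements + fed-back outputs
    y(:,k) = extractdata(predict(net, x));
    yPrev = y(:,k);       % one-sample delay implemented by the loop
end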
Note: The model mcb_pmsm_foc_qep_deep_learning_f28379d.slx provides you with the option to switch between neural network-based and quadrature encoder-based position sensing.
Simulate and Deploy Code
After you export the neural network to Simulink, run the following command to open the updated target Simulink model.
open_system('mcb_pmsm_foc_qep_deep_learning_f28379d.slx');
Simulate Model
Follow these steps to simulate the model.
1. Select one of these options available in the target model mcb_pmsm_foc_qep_deep_learning_f28379d.slx:
QEP Position - Select this button to use the quadrature encoder position sensor.
NN Position - Select this button to use the neural network that you trained using this example.
2. Click Run on the Simulation tab to simulate the model.
3. Click Data Inspector on the Simulation tab to view and analyze the simulation results.
The following figure shows the comparison between the position obtained using the quadrature encoder sensor and the position estimated by the trained neural network.
Required Hardware
This example supports the following hardware configuration.
LAUNCHXL-F28379D controller + BOOSTXL-DRV8305 inverter
Generate Code and Deploy Model to Target Hardware
Use the following procedure to generate code for the target model as well as deploy the generated code to the hardware.
1. Simulate the target model and observe the simulation results.
2. Complete the hardware connections. For details about hardware connections related to the LAUNCHXL-F28379D controller + BOOSTXL-DRV8305 inverter configuration, see LAUNCHXL-F28069M and LAUNCHXL-F28379D Configurations.
3. The target model automatically computes the ADC (or current) offset values. To disable this functionality (enabled by default), set the variable inverter.ADCOffsetCalibEnable to 0 in the model initialization script. Alternatively, you can compute the ADC offset values and update them manually in the model initialization script. For instructions, see Run 3-Phase AC Motors in Open-Loop Control and Calibrate ADC Offset.
4. To use the trained neural network for position estimation, ensure that you select NN Position in the target model.
5. To use the quadrature encoder sensor, select QEP Position in the target model. In addition, compute the encoder index offset value and update it in the model initialization script associated with the target model. For instructions, see Quadrature Encoder Offset Calibration for PMSM.
6. Load a sample program to CPU2 of the LAUNCHXL-F28379D, for example, a program that operates the CPU2 blue LED using GPIO31 (c28379D_cpu2_blink.slx), to ensure that CPU2 is not mistakenly configured to use the board peripherals intended for CPU1. For more information about the sample program or model, see the Task 2 - Create, Configure and Run the Model for TI Delfino F28379D LaunchPad (Dual Core) section in Getting Started with Texas Instruments C2000 Microcontroller Blockset (C2000 Microcontroller Blockset).
7. Click Build, Deploy & Start on the Hardware tab to deploy the target model to the hardware.
8. Click the host model hyperlink in the target model to open the associated host model. For details about the serial communication between the host and target models, see Host-Target Communication.
9. In the model initialization script associated with the target model, specify the communication port using the variable target.comport. The example uses this variable to update the Port parameter of the Host Serial Setup, Host Serial Receive, and Host Serial Transmit blocks available in the host model.
10. Update the Reference Speed (RPM) value in the host model.
11. Click Run on the Simulation tab to run the host model.
12. Change the position of the Start / Stop Motor switch to On to start running the motor.
13. Use the display and scope blocks available on the host model to monitor the debug signals.
See Also
analyzeNetwork (Deep Learning Toolbox) | fullyConnectedLayer (Deep Learning Toolbox) | tanhLayer (Deep Learning Toolbox) | dlnetwork (Deep Learning Toolbox) | trainnet (Deep Learning Toolbox)
Related Examples
- Field-Oriented Control of PMSM Using Position Estimated by Neural Network on STM32 Processor Based Boards (Embedded Coder)
More About
- Building Neural Network for Virtual Position Sensing
- List of Deep Learning Layers (Deep Learning Toolbox)