Accelerate AI Based Software Development on Infineon AURIX TC4x Microcontroller
This example shows you how to implement AI-based motor control functions using the Model-Based Design approach by deploying a multi-layer perceptron (MLP) neural network on the Infineon® AURIX™ TC4x microcontroller.
The parallel processing unit (PPU) core of the Infineon AURIX TC4x microcontroller is a specialized processing unit designed to speed up complex computations; this example targets it through the code replacement library (CRL) technique for hardware-specific code generation. The example uses the PPU core to implement an MLP network that estimates the rotor position, and the TriCore 0 core to implement the sensor-based Field-Oriented Control (FOC) (Motor Control Blockset) algorithm. The TriCore 0 core also computes the error between the actual rotor position, calculated from an encoder sensor, and the rotor position predicted by the trained MLP network, which acts as a virtual sensor.
This example includes predefined classes and user-defined functions, which you can use to configure the neural network architecture, extract training data, export the trained neural network to Simulink, and perform simulations. You can either simulate or deploy the extracted Simulink® model with the trained neural network to predict the rotor position.
Prerequisites
Complete the following examples and tutorials:
Getting Started with PPU Acceleration for Infineon AURIX TC4x Microcontrollers
Analyze Sensorless Observers for Field-Oriented Control Using Multiple Cores of Infineon AURIX
Get Started with Deep Learning Toolbox (Deep Learning Toolbox)
Perceptron Neural Networks (Deep Learning Toolbox)
Neuron Model (Deep Learning Toolbox)
Required Hardware
Infineon AURIX™ TC4x-TriBoards
AURIX™ TC3xx Motor Control Power Board
Nanotec DB42S02 Electric motor
WEDL5541-B14-KIT (5 mm) Incremental Encoder
Hardware Connection
Connect the hardware as shown in this figure:
Available Models
The example includes these models:
The example includes these folders:
a. The classes folder contains user-defined class files, which you can use to configure the neural network parameters, network architecture, Simulink model information, solver training options, and data collection settings.
The mlpOnPPUExample file comprises all the tunable MLP architecture, network parameters, and example model settings. You can use this file to start this example on the PPU core.
Analyze the MLPArchitecture file to understand the network architecture parameter settings such as the time window, hidden layer structure, MLP layers, loss function, solver training options, and feature and target scaling options.
Use the MLPNetworkParameters file to analyze network parameters such as the weights and biases of the MLP network layers.
The ModelsInformation file stores model information, sets up the log signals, and creates the MLP network Simulink model.
The DataCollection file helps with data collection and with feature and target scaling.
The CollectedLogData file maintains the relationship between the collected log data and the time stamps necessary for using the correct data for training and testing the MLP network.
The HiddenLayersStructure file creates the hidden layers of the MLP network.
b. Use the codeAndCache folder to store the training data, prediction data, software executables, and generated code.
c. The data folder stores the custom training data, log data, and trained network information.
d. The scripts folder contains the MATLAB® scripts supporting the available models.
You can use the classes, functions, data, and scripts to simulate the available models for different scenarios. Set the simulationMode parameter of the predefined mlpEg structure to one of these values:
dataCollection - To collect data in external mode
simulationMLP - To simulate the exported MLP network
simulationSOC - To simulate the top-level model in a multicore workflow
ppuSubsystemPIL - To perform PIL simulations
prediction - To predict the rotor position using the virtual sensor (trained MLP)
Note: Close the Simulink® models before setting the simulationMode parameter of mlpEg.
Run the following code to set the predefined parameters. This code creates the mlpEg variable, which stores all the required settings.
clear; bdclose all; clear mlpOnPPUExample; clc; tc4x_soc_ai_setup_script;
Step 1: Describe Neural Network Architecture
The following figure shows the FOC algorithm architecture, where:
$V_\alpha$ is the stator voltage along the $\alpha$-axis of the reference frame
$V_\beta$ is the stator voltage along the $\beta$-axis of the reference frame
$I_\alpha$ is the stator current along the $\alpha$-axis of the reference frame
$I_\beta$ is the stator current along the $\beta$-axis of the reference frame
$\theta_e$ is the electrical position of the motor
The neural network must predict ($\theta_e$) for unknown inputs ($V_\alpha$, $V_\beta$, $I_\alpha$, $I_\beta$) by using a known set of inputs ($V_\alpha$, $V_\beta$, $I_\alpha$, $I_\beta$) and outputs ($\theta_e$). This example uses an MLP neural network because it is a good choice for such function estimation and regression problems. You must train the MLP neural network using data collected from the target hardware for some known values of ($V_\alpha$, $V_\beta$, $I_\alpha$, $I_\beta$) and the corresponding ($\theta_e$).
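Later in this example, the network predicts the pair ($\sin\theta_e$, $\cos\theta_e$) rather than $\theta_e$ directly; this is a common trick for angle regression because it avoids the $2\pi$ wrap-around discontinuity. The angle can then be recovered with atan2, as in this minimal sketch (the placeholder output values are for illustration only):

```matlab
% Recover an electrical angle from predicted sine and cosine components.
% Predicting (sin, cos) avoids the 2*pi wrap-around discontinuity of
% regressing the angle directly.
predSin = 0.5; predCos = sqrt(3)/2;  % placeholder network outputs
theta_e = atan2(predSin, predCos);   % radians, in (-pi, pi]
theta_e = mod(theta_e, 2*pi);        % map to [0, 2*pi) if needed
```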
This example uses a preconfigured fully connected MLP network with ReLU activation function for all the hidden layers.
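A fully connected MLP of this kind can be sketched with Deep Learning Toolbox layer objects. This is only an illustration, not the exact architecture the example configures: the shipped HiddenLayersStructure class builds the real layer stack, and the input size, hidden layer widths, and two-element output assumed here are placeholders.

```matlab
% Sketch of a fully connected MLP with ReLU hidden layers (illustrative
% sizes; the example's classes build the actual network).
numFeatures = 4;          % assumed: V_alpha, V_beta, I_alpha, I_beta
hiddenSizes = [36 36 36]; % assumed hidden layer widths
layers = featureInputLayer(numFeatures);
for k = 1:numel(hiddenSizes)
    layers = [layers; fullyConnectedLayer(hiddenSizes(k)); reluLayer]; %#ok<AGROW>
end
layers = [layers; fullyConnectedLayer(2)]; % e.g., sin(theta_e) and cos(theta_e)
```

Note that the example also feeds time-windowed past features to the network, which increases the effective input size beyond the four raw signals.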
Note: Analyze the mlpOnPPUExample.m file in the classes folder shipped with this example to change the network settings.
Step 2: Collect Data for Training
The example needs ($V_\alpha$, $V_\beta$, $I_\alpha$, $I_\beta$) and ($\theta_e$) data to train the neural network so that it acts like a virtual position sensor and estimates the rotor position using artificial intelligence.
This example includes the training data extracted from the tc4x_soc_mlp_foc_top model by running the referenced tc4x_soc_mlp_foc_tricore0 model in external mode. You can use this training data, shipped with the example, to train the MLP network.
Run the following code to view the shipped training data file properties.
%%disp(mlpEg.collectedLogData); % Uncomment to view the properties of training data file shipped with this example.
Follow these steps to collect custom training data by running the model in external mode and logging the data using the Simulation Data Inspector.
1. Run the following command to set the simulation mode.
mlpEg.simulationMode = 'dataCollection';
This command also sets the cache folder path to codeAndCache/DATACOL/Cache/ and the code generation folder path to codeAndCache/DATACOL/Code/, which are used during the model build step.
2. Complete the hardware connections.
3. Open the tc4x_soc_mlp_foc_tricore0 model.
4. Press Ctrl+E or select Modeling > Model Settings to open the Configuration Parameters window.
Set the Connectivity interface parameter to Serial (ASCLIN0) by navigating to Hardware Implementation > Target hardware resources > Connectivity.
Specify the Port parameter for the external mode of simulation and click OK. To see the list of available COM ports on your computer, select Start > Control Panel > Device Manager > Ports (COM & LPT).
5. Open the tc4x_soc_mlp_foc_top model.
6. On the Hardware tab, click Configure, Monitor & Tune to configure the model for the external mode of simulation. The SoC Builder tool opens and guides you through the simulation steps.
7. In the Select Project Folder window, select the project folder and click Next. You can use data/customData to maintain a clean work folder.
8. In the Review Hardware Mapping window, click View/Edit to review or click Next to continue without the Hardware Mapping review.
9. In the Select External Mode on CPU window, select TriCore0 as the CPU for external mode.
10. In the External Mode Connectivity window, verify the external connection details with the values you configured in Step 2.4 and click Next.
11. Click Validate in the Validate Model window to check the compatibility of the model against the selected hardware board. After successful validation of the model, click Next.
12. Click Build in the Build Model window to generate a compiled software executable for the model. Once the model successfully builds, click Next.
13. Click Load and Run in the Run Application window to run the model in external mode. This step opens a software interface model for the TriCore 0 referenced model.
14. Click Data Inspector on the Simulation tab of the software interface model.
Select the signals Theta_e ($\theta_e$), V_alpha ($V_\alpha$), V_beta ($V_\beta$), I_alpha ($I_\alpha$), and I_beta ($I_\beta$) to export.
Right-click the run and select Export to export the data to a MAT file. You can save this exported data in the data/customData/ folder to maintain a clean work folder. For more information, see Save and Share Simulation Data Inspector Data and Views, Log or Stream Real-Time Signals by Using the Simulation Data Inspector (Simulink Real-Time), and Generate Code and Deploy Using SoC Builder.
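You can also export a logged run programmatically with the Simulation Data Inspector API. This sketch assumes the most recent run holds the logged signals; the destination file name is a placeholder.

```matlab
% Export the most recent Simulation Data Inspector run to a MAT file.
runIDs = Simulink.sdi.getAllRunIDs;  % IDs of all logged runs
latestRun = runIDs(end);             % assumed: the last run holds the signals
Simulink.sdi.exportRun(latestRun, 'to', 'file', ...
    'filename', 'data/customData/loggedRun.mat'); % placeholder file name
```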
15. Set up the collectedLogData object to extract MLP training data by running this code.
logFileName = 'internal'; % Path to the extracted data for custom logged data, such as data/customData
trainingAndTestingStartTimeStamp = 1; % Timestamp of the start of the useful data
trainingAndTestingEndTimeStamp = 30; % Timestamp of the end of the useful data
simulationStopTime = 30; % Stop time of the simulation
mlpEg.collectedLogData = CollectedLogData(trainingAndTestingStartTimeStamp, ...
    trainingAndTestingEndTimeStamp, simulationStopTime, logFileName);
%%mlpEg.plotRawInputs(); % Uncomment to plot raw data collected for training
%%mlpEg.plotMLPTrainingAndTestingData(); % Uncomment to plot training and testing data
%%mlpEg.plotMLPTimeWindowedTrainingData(7,7.05); % Uncomment to plot the time window for training and testing data
Step 3: Train MLP Neural Network
Run the following code to train the neural network.
mlpEg.trainedNet = trainnet(mlpEg.trainingFeatures, mlpEg.trainingTargets, mlpEg.mlpLayers, mlpEg.lossFunction, mlpEg.solverTrainingOptions);
%%mlpEg.testTrainedMLPNetwork(); % Use the predict API from the Deep Learning Toolbox to predict ($\sin\theta_e$, $\cos\theta_e$) for the testing dataset.
Iteration    TimeElapsed    TrainingLoss    TrainingRMSE    GradientNorm    StepNorm
        1       00:00:04          0.1954         0.62531          0.1851     0.65053
       50       00:00:52       0.0011363        0.047797       0.0014675    0.052494
      100       00:01:42      0.00024044         0.02199      0.00080357   0.0074304
Training stopped: Max iterations completed
For more information on neural network training options, see the trainnet (Deep Learning Toolbox) function and the Train Deep Neural Networks (Deep Learning Toolbox) tutorials.
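The Iteration, GradientNorm, and StepNorm columns in the log above are characteristic of the L-BFGS solver. A standalone set of such options might look like the following sketch; the example itself stores its options in mlpEg.solverTrainingOptions, and the values here are illustrative rather than the shipped defaults.

```matlab
% Sketch of solver training options for trainnet with the L-BFGS solver.
% The specific values are assumptions, not the example's shipped settings.
opts = trainingOptions("lbfgs", ...
    MaxIterations = 100, ... % stop after a fixed number of iterations
    Verbose = true, ...      % print the iteration table
    Plots = "none");         % no training progress plot
```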
Step 4: Export Trained MLP to Simulink Model
Run the following command to export the trained MLP neural network to Simulink®:
mlpEg.MLPNetworkToSimulinkModelCreator();
After you export the MLP to Simulink, run the following command to open the updated target Simulink model.
open_system('tc4x_flat_mlp_sim_ppu.slx');
Step 5: Simulate Top-Level Model
To analyze the performance of the trained MLP, simulate the top-level model and observe the performance of the trained MLP as a virtual sensor.
1. Run the following command to set the simulation mode.
mlpEg.simulationMode = 'simulationSOC';
2. Open tc4x_soc_mlp_foc_top
model.
3. Click Run on the Simulation tab to simulate the model.
4. Click Data Inspector on the Simulation tab to view and compare these parameters:
Electrical position of the motor obtained from the encoder sensor (Theta_e) versus the predicted electrical position obtained from the trained MLP network (MLP_THETA_eSignal)
Reference speed (Speed_ref) and feedback speed (Speed_fbk) versus the speed predicted by the MLP network acting as a virtual sensor (MLP_SpeedRPMSignal)
Error between the reference speed and the speed predicted by the MLP network (Error_Ref_MLPSignal) versus the error between the feedback speed and the speed predicted by the MLP network (Error_Fbk_MLPSignal)
Observe that Theta_e approximately matches MLP_THETA_eSignal. Similarly, MLP_SpeedRPMSignal approximately matches Speed_ref and Speed_fbk, and Error_Ref_MLPSignal approximately matches Error_Fbk_MLPSignal.
Step 6: PIL Simulation of PPU Core Based MLP
Verify the hardware-specific code generation for the PPU core and the performance of the trained MLP network on the hardware by performing a PIL simulation on the MLP network block obtained in Step 4.
1. Run the following code to set the mode of simulation.
mlpEg.simulationMode = 'ppuSubsystemPIL';
2. Open the tc4x_flat_mlp_pil_ppu model, which is created using the MLP network block from Step 4.
3. Perform PIL simulation by following the steps in Code Verification and Validation with PIL Using PPU and check the code execution and profiling report.
4. Observe the generated code with code replacement libraries (CRL). For more information, see Configure and Run PIL Simulation.
Step 7: Generate Code and Deploy Model to Target Hardware
Deploy the trained MLP network on the target hardware and observe the performance of the virtual sensor.
1. Run the following command to set the simulation mode to prediction.
mlpEg.simulationMode = 'prediction';
2. Complete the hardware connections.
3. Open the tc4x_soc_mlp_foc_top model by running this command:
open_system('tc4x_soc_mlp_foc_top.slx');
The TriCore 0 collects the necessary inputs and sends them to the PPU using the Interprocess Data Channel block. The PPU uses the trained MLP network to predict the rotor position and sends the data back using the Interprocess Data Channel block. The TriCore 0 referenced model uses this predicted rotor position to calculate predicted speed.
4. Follow Step 2.4 to Step 2.12 to simulate the model in external mode. Use codeAndCache/PRDCT/SocPrj/ in the Select Project Folder window to maintain a clean work folder.
5. Click Data Inspector on the Simulation tab of the software interface model to view and compare the Theta_e, MLP_THETA_eSignal, Speed_ref, Speed_fbk, MLP_SpeedRPMSignal, Error_Ref_MLPSignal, and Error_Fbk_MLPSignal parameters.
Observe that the position estimated by the trained MLP network (MLP_THETA_eSignal) approximately matches the position estimated by the encoder sensor (Theta_e), which verifies the performance of the MLP as a virtual sensor. Also observe that the speed predicted by the MLP network (MLP_SpeedRPMSignal) approximately matches the reference speed (Speed_ref) and the feedback speed (Speed_fbk). The error between the reference speed and the speed predicted by the MLP network (Error_Ref_MLPSignal) approximately matches the error between the feedback speed and the speed predicted by the MLP network (Error_Fbk_MLPSignal).
Other Things to Try
Modify MLP Architecture: You can change the network architecture and its training and testing options by running this code in the Command Window.
numberOfHiddenLayers = 6; % Number of hidden layers in the MLP
hiddenLayersNumNodesArray = [36 36 36 36 36 36]; % Number of neurons in each hidden layer
mlpEg.hiddenLayersStructure = HiddenLayersStructure(numberOfHiddenLayers, hiddenLayersNumNodesArray); % Sets the MLP architecture
%%disp(mlpEg.mlpLayers); % Uncomment to display MLP layers
mlpEg.portionOfSteadyStateDataForTraining = 0.8; % Portion of the collected data to use for training (the remaining data is used for testing)
mlpEg.featureScalingMethod = 'zScore'; % 'minMax', 'zScore', or 'none'
mlpEg.targetScalingMethod = 'thetaTransform'; % 'thetaTransform', 'targetScaling', 'targetScalingAndThetaTransform', or 'none'
mlpEg.timeWindow = 9; % Number of past feature samples fed along with the current features to the MLP during training
mlpEg.lossFunction = 'huber'; % Training loss function
mlpEg.solverTrainingOptions.MaxIterations = 80;
mlpEg.solverTrainingOptions.Verbose = false;
mlpEg.solverTrainingOptions.Plots = 'none';
%%disp(mlpEg.solverTrainingOptions); % Uncomment to display the chosen training options
For more information on training options, see Train Deep Neural Networks (Deep Learning Toolbox).
Analyze the mlpOnPPUExample.m file in the classes folder shipped with this example to change the network settings.
Import pretrained networks from external deep learning platforms: You can train the neural network using third-party tools, but the neural network must be an MLP with fully connected layers and the ReLU activation function. See Pretrained Networks from External Platforms (Deep Learning Toolbox) to import such trained networks to Simulink®.
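For example, a network trained elsewhere and saved in ONNX format could be brought in with the ONNX importer. This is a sketch: the file name is a placeholder, and importNetworkFromONNX requires the Deep Learning Toolbox Converter for ONNX Model Format support package.

```matlab
% Sketch: import an externally trained MLP from an ONNX file.
% "externalMLP.onnx" is a placeholder file name.
net = importNetworkFromONNX("externalMLP.onnx");
analyzeNetwork(net); % confirm fully connected layers with ReLU activations
```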