This example shows how to build a smile detector by using the OpenCV Importer. The detector estimates the intensity of the smile on a face image or a video. Based on the estimated intensity, the detector identifies an appropriate emoji from its database, and then places the emoji on the smiling face.
First import an OpenCV function into Simulink® by using the OpenCV Importer, as described in Install and Use Computer Vision Toolbox Interface for OpenCV in Simulink. The wizard creates a Simulink library that contains a subsystem and a C Caller block for the specified OpenCV function. The subsystem is then used in a preconfigured Simulink model to accept the facial image or video for smile detection. You can generate C++ code from the model and then deploy the code on your target hardware.
You learn how to:
Import an OpenCV function into a Simulink library.
Use blocks from a generated library in a Simulink model.
Generate C++ code from a Simulink model.
Deploy the model on the Raspberry Pi hardware.
To build the OpenCV libraries, identify a compatible C++ compiler for your operating system, as described in Portable C Code Generation for Functions That Use OpenCV Library. Configure the identified compiler by using the mex -setup c++ command. For more information, see Choose a C++ Compiler.
In this example, a smile detector is implemented by using the Simulink model smileDetect.slx. In this model, the subsystem_slwrap_detectAndDraw subsystem resides in the Smile_Detector_Lib library. You create the subsystem_slwrap_detectAndDraw subsystem by using the OpenCV Importer. The subsystem accepts a face image or a video and provides these output values.
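The actual signals depend on the imported OpenCV function; as a rough, stdlib-only sketch of the interface shape — the struct, the stub, and its brightness-based "intensity" are purely illustrative assumptions, not the wrapped OpenCV code:

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Hypothetical result type mirroring the subsystem's assumed outputs:
// the annotated image and a smile-intensity score in [0, 1].
struct DetectResult {
    std::vector<uint8_t> outImage;  // image with the detected face drawn on it
    double intensity;               // estimated smile intensity
};

// Stub standing in for the wrapped OpenCV detectAndDraw call. Here the
// "intensity" is just the fraction of bright pixels, purely for illustration.
DetectResult detectAndDrawStub(const std::vector<uint8_t>& img) {
    std::size_t bright = 0;
    for (uint8_t p : img)
        if (p > 127) ++bright;
    return {img, img.empty() ? 0.0
                             : static_cast<double>(bright) / img.size()};
}
```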
The MATLAB® Function block accepts input from the subsystem_slwrap_detectAndDraw subsystem block. The MATLAB Function block has a set of emoji images. The smile intensity of the emoji in these images ranges from low to high. From the emoji images, the block identifies the most appropriate emoji for the estimated intensity and places it on the face image. The output is then provided to the Detected Face and Smiley Replacement Video Viewer blocks.
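The selection step can be thought of as binning the intensity into as many slots as there are emoji images. A minimal C++ sketch of that idea, assuming an intensity in [0, 1] and emojis ordered from low to high (the linear binning is an assumption, not the shipped MATLAB Function code):

```cpp
#include <algorithm>
#include <cassert>

// Given a smile intensity in [0, 1] and numEmojis emoji images ordered
// from low to high intensity, pick the index of the best-matching emoji.
// Linear binning is an illustrative choice, not the actual block logic.
int pickEmojiIndex(double intensity, int numEmojis) {
    double clamped = std::min(1.0, std::max(0.0, intensity));
    int idx = static_cast<int>(clamped * numEmojis);  // bin [0,1] into N slots
    return std::min(idx, numEmojis - 1);              // intensity == 1.0 maps to last
}
```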
To access the path to the example folder, at the MATLAB command line, enter:
Each subfolder contains all the supporting files required to run the example.
Before proceeding with these steps, ensure that you copy the example folder to a writable folder location and change your current working folder to ...example\SmileDetector. All your output files are saved to this folder.
1. To start the OpenCV Importer app, click Apps on the MATLAB Toolstrip. In the Welcome page, specify the Project name as Smile_Detector. Make sure that the project name does not contain any spaces. Click Next.
2. In Specify OpenCV Library, specify these file locations, and then click Next.
Project root folder: Specify the path of your example folder. This path is the path to the writable project folder where you have saved your example files. All your output files are saved to this folder.
Source files: Specify the path of the .cpp source file located inside your project folder.
Include files: Specify the path of the .hpp header file located inside your project folder.
3. Analyze your library to find the functions and types for import. Once the analysis is complete, click Next. Select the detectAndDraw function and click Next.
4. From What to import, select the I/O Type for Input, and then click Next.
5. In Create Simulink Library, verify the default values and click Next.
A Simulink library Smile_Detector_Lib is created from your OpenCV code in the project root folder. The library contains a subsystem and a C Caller block. You can use either of these blocks for model simulation. In this example, the subsystem subsystem_slwrap_detectAndDraw is used.
To use the generated subsystem subsystem_slwrap_detectAndDraw with the Simulink model smileDetect.slx:
1. In your MATLAB Current Folder, right-click the model smileDetect.slx and click Open from the context menu. In the model, delete the existing subsystem_slwrap_detectAndDraw subsystem and drag the generated subsystem subsystem_slwrap_detectAndDraw from the Smile_Detector_Lib library to the model. Connect the subsystem to the MATLAB Function block.
2. Double-click the subsystem and specify these parameter values.
3. Click Apply, and then click OK.
On the Simulink Toolstrip, in the Simulation tab, click Run to simulate the model. After the simulation is complete, the Video Viewer blocks display the detected face and the face with the emoji replacement. The emoji represents the intensity of the smile.
Before you generate code from the model, ensure that you have write permission in your current folder.
To generate C++ code:
1. Open the smileDetect_codegen.slx model from your MATLAB Current Folder.
2. On the Apps tab on the Simulink Toolstrip, select Embedded Coder. On the C++ Code tab, in the Settings drop-down list, click C/C++ Code generation settings to open the Configuration Parameters dialog box and verify these settings:
In the Code Generation pane, under Target selection, Language is set to C++.
In the Interface under Code Generation, Array layout in the Data exchange interface category is set to Row-major.
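This setting matters because OpenCV stores pixels in row-major order, while MATLAB arrays default to column-major, so the generated code must agree with the library. A minimal sketch contrasting the two linear-index formulas for the same rows-by-cols image (helper names are illustrative):

```cpp
#include <cassert>
#include <cstddef>

// Linear index of pixel (r, c) in a rows x cols image. OpenCV walks a
// row at a time (row-major); MATLAB's default layout walks a column at
// a time (column-major). Mixing the two scrambles the image data.
std::size_t rowMajorIndex(std::size_t r, std::size_t c,
                          std::size_t rows, std::size_t cols) {
    (void)rows;  // unused in the row-major formula
    return r * cols + c;
}

std::size_t colMajorIndex(std::size_t r, std::size_t c,
                          std::size_t rows, std::size_t cols) {
    (void)cols;  // unused in the column-major formula
    return c * rows + r;
}
```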
3. Connect the generated subsystem subsystem_slwrap_detectAndDraw to the MATLAB Function block.
4. To generate C++ code, under the C++ Code tab, click the Generate Code drop-down list, and then click Build. After the model finishes generating code, the Code Generation Report opens and you can inspect the generated code. The build process creates a zip file named smileDetect_with_ToOpenCV.zip in your current MATLAB working folder.
Before you deploy the model, connect the Raspberry Pi to your computer. Wait until the PWR LED on the hardware starts blinking.
In the Settings drop-down list, click Hardware Implementation to open the Configuration Parameters dialog box and verify these settings:
Set the Hardware board to Raspberry Pi. The Device Vendor is set to ARM Compatible.
In the Code Generation pane, under Target selection, Language is set to C++. Under Build process, Zip file name is set to smileDetect_with_ToOpenCV.zip. Under Toolchain settings, the Toolchain is specified as GNU GCC Raspberry Pi.
To deploy the code to your Raspberry Pi hardware:
1. From the generated zip file, copy these files to your Raspberry Pi hardware.
2. On the Raspberry Pi, go to the location where you saved the files. To generate an ELF executable file, enter this command:
make -f smileDetect.mk
3. Run the executable on the Raspberry Pi. After successful execution, you see the output on the Raspberry Pi with an emoji placed on the face image.