In the above script:
- hW.compile generates instructions for the specified network (net) and the specified bitstream ('zcu102_single').
- In the hW.deploy stage, those instructions are written to external DDR memory and the deep learning (DL) IP core is configured.
- The hW.predict command feeds the input image to the network executing on the FPGA and captures the output data along with frames-per-second (FPS) information.
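For reference, the three steps above correspond to a dlhdl.Workflow sequence along these lines. This is a minimal sketch: the variable names hTarget, net, and inputImg, and the 'Ethernet' interface choice, are assumptions, not taken from your script.

```matlab
% Minimal sketch (hTarget, net, inputImg, and the interface are assumptions)
hTarget = dlhdl.Target('Xilinx', 'Interface', 'Ethernet');  % or 'JTAG'
hW = dlhdl.Workflow('Network', net, ...                     % trained network
                    'Bitstream', 'zcu102_single', ...
                    'Target', hTarget);
hW.compile;   % generate instructions for the network and bitstream
hW.deploy;    % program the FPGA, load instructions to DDR, configure the DL IP
[prediction, speed] = hW.predict(inputImg, 'Profile', 'on');  % run inference, report FPS
```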
Regarding your second question: after the hW.deploy command, if you would like to feed input from another source, you should implement the system integration interfaces described on this documentation page: https://www.mathworks.com/help/deep-learning-hdl/ug/interface-with-the-deep-learning-processor-ip-core.html.
Here is another example of system integration: https://www.mathworks.com/help/deep-learning-hdl/ug/deploy-and-verify-yolo-v2-vehicle-detector-on-fpga.html