predict
Compute deep learning network output for inference by using a TensorFlow Lite model
Since R2022a
Description
Y = predict(net,X) returns the network output Y during inference given the input data X and the network net with a single input and a single output.

To use this function, you must install the Deep Learning Toolbox Interface for TensorFlow Lite support package.
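A minimal sketch of basic usage. It assumes the support package is installed and that loadTFLiteModel is used to create the TFLiteModel object; the model filename and input size are placeholders.

```matlab
% Load a TensorFlow Lite model (requires the Deep Learning Toolbox
% Interface for TensorFlow Lite support package).
% "mobilenet_v1_1.0_224.tflite" is a placeholder filename.
net = loadTFLiteModel("mobilenet_v1_1.0_224.tflite");

% Prepare a single input sized to the network's expected input
% (224-by-224-by-3 is assumed here for illustration).
I = imread("peppers.png");
X = single(imresize(I,[224 224]));

% Run inference; Y contains the network output.
Y = predict(net,X);
```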
___ = predict(___,Name=Value) provides options to control the data type (int8/uint8 vs. single) of the inputs and outputs when net is a quantized model. An additional option enables or disables the execution of predict on Windows® platforms for quantized models. All of these name-value arguments are ignored if net is not a quantized model.
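A hedged sketch of calling predict on a quantized model with a name-value argument. The argument name QuantizeInputs below is an illustrative assumption, not confirmed by this page; consult the function's Name-Value Arguments section for the exact option names.

```matlab
% Sketch only: QuantizeInputs is a hypothetical name-value argument
% used to illustrate controlling the int8/uint8 vs. single data type
% of inputs for a quantized TFLiteModel. Check the documentation for
% the actual supported names.
Y = predict(net,X,QuantizeInputs=true);
```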
Tip
For prediction with SeriesNetwork and DAGNetwork objects, see predict.
Examples
Input Arguments
Output Arguments
Extended Capabilities
Version History
Introduced in R2022a
See Also
Topics
- Deploy Pose Estimation Application Using TensorFlow Lite (TFLite) Model on Host and Raspberry Pi
- Generate Code for TensorFlow Lite (TFLite) Model and Deploy on Raspberry Pi
- Deploy Super Resolution Application That Uses TensorFlow Lite (TFLite) Model on Host and Raspberry Pi
- Prerequisites for Deep Learning with TensorFlow Lite Models