
ST Edge AI: Guide for MPU

Applicable for STM32MP25x lines


1. Description[edit | edit source]

1.1. What is ST Edge AI?[edit | edit source]

The ST Edge AI utility provides a complete and unified Command Line Interface (CLI) to generate, from a pre-trained Neural Network (NN), an optimized model or a library for all STM32 devices, including MPU, MCU, ISPU, and MLC. It consists of three main commands: analyze, generate, and validate. Each command can be used independently of the others, with the same set of common options (model files, output directory…) and any command-specific options.

In the case of the STM32MPU, you can use the generate command of ST Edge AI to convert an NN model into an optimized Network Binary Graph (NBG). The NBG is the only format that allows you to run an NN model with the STM32MP2x Neural Processing Unit (NPU) acceleration.

Info white.png Information
The analyze and validate commands in the ST Edge AI for STM32MPUs are currently under development and will be available in a future release.

1.2. Main features[edit | edit source]

ST Edge AI is delivered as an archive containing an installer that can be executed to install the tool on the computer. The installer offers the possibility to select the ST Edge AI components to install. Some of these components are not available on all operating systems: the STM32MP2 component is available only on Linux.

Warning DB.png Important
The STM32MP2 component of ST Edge AI is available only on Linux.

The tool already contains the complete Python environment required to run a conversion. The objective is to allow the user to easily convert and execute an NN model on the STM32MP2x platforms. For this, the tool converts a quantized TensorFlow™ Lite[1] or ONNX™[2] model into the NBG format.

The Network Binary Graph (NBG) is a precompiled NN model format based on the OpenVX™ graph representation. It is the only NN format that can be loaded and executed directly on the NPU of STM32MP2x boards.

To obtain the best performance, the model provided to the tool must be quantized with the 8-bit per-tensor asymmetric quantization scheme. If the quantization scheme is 8-bit per-channel, the model mainly runs on the GPU instead of the NPU.
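If your starting point is a floating-point TensorFlow™ model, the following minimal sketch shows one common way to produce a fully 8-bit quantized TensorFlow™ Lite file. It assumes TensorFlow 2.x; "saved_model_dir" and the random calibration samples are placeholders for your own model and representative dataset. Note that the TFLite converter may quantize weights per-channel by default, so check the resulting scheme against the per-tensor recommendation above.

 # Minimal post-training quantization sketch (assumes TensorFlow 2.x).
 # "saved_model_dir" and the random calibration samples are placeholders.
 import numpy as np
 import tensorflow as tf
 
 def rep_data_gen():
     # Yield calibration samples shaped like the real model input.
     for _ in range(100):
         yield [np.random.rand(1, 224, 224, 3).astype(np.float32)]
 
 converter = tf.lite.TFLiteConverter.from_saved_model("saved_model_dir")
 converter.optimizations = [tf.lite.Optimize.DEFAULT]
 converter.representative_dataset = rep_data_gen
 # Restrict to full 8-bit integer ops, inputs, and outputs.
 converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
 converter.inference_input_type = tf.int8
 converter.inference_output_type = tf.int8
 
 with open("model_int8.tflite", "wb") as f:
     f.write(converter.convert())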

The tool does not support custom operators:

  • If the provided model contains custom operations, they are automatically removed or the generation fails.
  • If the output of the model is already post-processed using a custom operator, such as the TFLite post-process operator, the post-processing layer is deleted.

To prevent this situation, either:

  • Provide a model that does not include the custom post-process layer, and implement the post-processing function inside your application. The model runs on the NPU, and the post-processing is executed on the CPU.
  • Split your model: execute the core of the model on the NPU using the NBG model, and execute the post-processing layer on the CPU using your TensorFlow™ Lite or ONNX™ model with the stai_mpu API (see the sketch after this list).
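As an illustration of the split approach, the sketch below assumes the stai_mpu Python API provided by the X-LINUX-AI package (stai_mpu_network, set_input, run, get_output); the input shape, dtype, and the decode_predictions() post-processing function are hypothetical placeholders for your own application code.

 # Illustrative split: the NBG core runs on the NPU, the post-processing on the CPU.
 # Assumes the stai_mpu Python API from X-LINUX-AI; decode_predictions() is a
 # hypothetical application-side post-processing function.
 import numpy as np
 from stai_mpu import stai_mpu_network
 
 # Load the NBG produced by "stedgeai generate"; it executes on the NPU.
 model = stai_mpu_network(model_path="network_core.nb")
 
 # Feed one preprocessed frame (shape and dtype are placeholders).
 frame = np.zeros((1, 224, 224, 3), dtype=np.int8)
 model.set_input(0, frame)
 model.run()
 raw_output = model.get_output(0)
 
 # CPU-side replacement for the removed custom post-process layer,
 # for example box decoding and non-maximum suppression.
 detections = decode_predictions(raw_output)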

Once the NBG is generated, you can benchmark it or develop your AI application. For further information, refer to the following articles, which present how to deploy your NN model and how to benchmark your NN model.

2. Installation[edit | edit source]

Warning DB.png Important
The STM32MP2 component of ST Edge AI is available only on Linux.

Download the ST Edge AI tool here: https://www.st.com/en/development-tools/stedgeai-core.html

If you need to set up specific proxy settings, follow the step-by-step procedure to set up the proxy:

ST Edge AI proxy setup


Then, follow the step-by-step procedure to install the tool:

ST Edge AI installation



Info white.png Information
A maintenancetool is also installed, allowing you to add, remove, or update ST Edge AI components.

The maintenancetool is an executable file located in your installation folder. When launched, it allows you to add or remove a component, update an existing component (if an update is available), or uninstall the ST Edge AI tool.

3. How to use the tool[edit | edit source]

3.1. Script utilization[edit | edit source]

The main tool interface is the stedgeai binary. First, go to the binary directory:

 cd <your_installation_path>/1.0/Utilities/linux

To print the target specific help, use:

 ./stedgeai --target stm32mp25 --help

The help output of the stedgeai binary for the stm32mp25 target is the following:

usage: stedgeai --model FILE --target stm32|stellar-e|ispu|mlc [--workspace DIR] [--output DIR] [--no-report] [--no-workspace] [-h] [--version]
               [--tools-version] [--verbosity [0|1|2|3]] [--quiet]
               generate

ST Edge AI Core v1.0.0-19895 (STM32 MP2 module v1.0.0)

command:
 generate              must be the first argument (default: analyze)
                       analyze
                       	check if the model is supported and get information about the model
                       	architecture and memory footprint to know if the generated code can be
                       	deployed on the target device
                       generate
                       	generate the converted model for the target device

common options:
 --model FILE, -m FILE
                       paths of the original model files
 --target stm32|stellar-e|ispu|mlc
                       target/device selector
 --workspace DIR, -w DIR
                       workspace folder to use (default: st_ai_ws)
 --output DIR, -o DIR  folder where the generated files are saved (default: st_ai_output)

additional options:
 --no-report           do not generate the report file
 --no-workspace        do not create the workspace folder
 -h, --help            show this help message and exit (use --target to get target specific help)
 --version             print the version of the tool
 --tools-version       print the versions of the third party packages used by the tool
 --verbosity [0|1|2|3], -v [0|1|2|3], --verbose [0|1|2|3]
                       set verbosity level
 --quiet               disable the progress-bar
Warning white.png Warning
The analyze and validate commands are not yet supported for the stm32mp25 target.

3.2. Testing with examples[edit | edit source]

To generate an NBG model, you need to provide a quantized TensorFlow™ Lite or ONNX™ model, and select the correct target: stm32mp25.

Info white.png Information
You do not need to specify the model type: the tool recognizes it automatically.
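If your ONNX™ model is not quantized yet, the sketch below shows one possible way to quantize it with onnxruntime post-training static quantization. It is a minimal sketch: the input name, shape, and random calibration samples are placeholders for your own data, and per_channel=False keeps the per-tensor scheme recommended above.

 # Minimal ONNX post-training static quantization sketch (assumes onnxruntime).
 # The input name, shape, and random calibration samples are placeholders.
 import numpy as np
 from onnxruntime.quantization import CalibrationDataReader, QuantType, quantize_static
 
 class RandomCalibrationReader(CalibrationDataReader):
     """Feeds calibration batches; replace with real representative data."""
     def __init__(self, samples):
         self._iter = iter(samples)
     def get_next(self):
         return next(self._iter, None)
 
 samples = [{"input": np.random.rand(1, 3, 224, 224).astype(np.float32)}
            for _ in range(100)]
 
 quantize_static(
     "model_fp32.onnx",
     "model_int8.onnx",
     RandomCalibrationReader(samples),
     per_channel=False,               # per-tensor scheme, as recommended above
     activation_type=QuantType.QInt8,
     weight_type=QuantType.QInt8,
 )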

3.2.1. Generate NBG from TensorFlow™ Lite or ONNX™[edit | edit source]

To convert a TensorFlow™ Lite model to NBG:

 ./stedgeai generate -m path/to/tflite/model --target stm32mp25

To convert an ONNX™ model to NBG:

 ./stedgeai generate -m path/to/onnx/model --target stm32mp25

This command generates two files: the .nb file, which is the NBG model, and a .txt file, which is the report of the generate command.

  • The .nb file is located in the output path specified by the --output option; by default, it is located in the st_ai_output directory.
  • The report (named report_modelName_stm32mp25.txt) is located in the workspace folder specified by the --workspace option; by default, it is located in the st_ai_ws directory.

3.2.2. Generate NBG from TensorFlow™ Lite or ONNX™ without report[edit | edit source]

To generate an NBG model without the report, add the --no-report option to your command line.

To convert a TensorFlow™ Lite model to NBG without the report:

 ./stedgeai generate -m path/to/tflite/model --target stm32mp25 --no-report

To convert an ONNX™ model to NBG without the report:

 ./stedgeai generate -m path/to/onnx/model --target stm32mp25 --no-report

This command generates a .nb file located in the output folder: if you use the --output option to specify a directory, the NBG is located inside it; otherwise, the default output folder is st_ai_output.

4. References[edit | edit source]