STM32MP1 artificial intelligence expansion packages

SUMMARY
The Artificial Intelligence expansion package contains AI frameworks to enable AI application examples that can be run on STM32MP1 hardware.

This package consists of an OpenEmbedded meta layer, meta-st-stm32mpu-ai, to be added on top of the STM32MP1 Distribution Package. It provides a complete, coherent and easy-to-build/install environment to take advantage of AI on STM32MP1 hardware.

1. Installation of the meta layer

  • Clone the following git repository into <Distribution Package installation directory>/layers/meta-st
 cd <Distribution Package installation directory>/layers/meta-st
 git clone https://gerrit.st.com/stm32mpuapp/meta/meta-st-stm32mpu-ai.git -b thud
  • Set up the build environment
 cd ../..
 DISTRO=openstlinux-weston MACHINE=stm32mp1 source layers/meta-st/scripts/envsetup.sh
  • Add the new layer
 bitbake-layers add-layer ../layers/meta-st/meta-st-stm32mpu-ai

2. Build the software image

  • For the AI computer vision (X-LINUX-AI-CV) expansion package
 bitbake st-image-ai-cv

3. The AI demo launcher

4. AI application examples

4.1. Python TensorFlow Lite applications

This part provides Python application examples based on TensorFlow Lite and OpenCV.
The applications integrate a camera preview (or test data pictures) that is connected to the chosen TensorFlow Lite model.
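
As an illustration of this pattern, the sketch below classifies a single test data picture with a TensorFlow Lite interpreter. It is a minimal sketch, not the actual application code: the tflite_runtime package name and the sample.jpg file name are assumptions, while the model and label paths are the default locations used later in this article.

 import cv2
 import numpy as np
 from tflite_runtime.interpreter import Interpreter  # assumption: package name
 
 # Load the quantized model and allocate its tensors
 model_dir = "/usr/local/demo-ai/models/mobilenet"
 interpreter = Interpreter(model_path=model_dir + "/mobilenet_v1_0.5_128_quant.tflite")
 interpreter.allocate_tensors()
 inp = interpreter.get_input_details()[0]
 out = interpreter.get_output_details()[0]
 _, height, width, _ = inp['shape']
 
 # Read a test data picture and resize it to the model input size
 img = cv2.imread(model_dir + "/testdata/sample.jpg")  # hypothetical file name
 img = cv2.cvtColor(cv2.resize(img, (width, height)), cv2.COLOR_BGR2RGB)
 
 # Run the inference and print the three best matching labels
 interpreter.set_tensor(inp['index'], np.expand_dims(img, axis=0))
 interpreter.invoke()
 scores = np.squeeze(interpreter.get_tensor(out['index']))
 with open(model_dir + "/labels.txt") as f:
     labels = [line.strip() for line in f]
 for i in scores.argsort()[-3:][::-1]:
     print(labels[i], scores[i])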

4.1.1. Image classification

4.1.1.1. Description

An image classification neural network model allows you to identify what an image represents: it classifies an image into one of a set of predefined classes.

The label_tfl_multiprocessing.py Python script is a multi-process application for image classification.
The application combines OpenCV camera streaming (or test data pictures) with a TensorFlow Lite interpreter that runs the neural network inference on the camera (or test data picture) inputs.
The user interface is built with Python GTK.
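
The sketch below illustrates the kind of two-process split the script name suggests: one process streams camera frames while another runs the TensorFlow Lite inference. It is a hedged illustration of the pattern only, not the actual application code (the GTK user interface is omitted, and the tflite_runtime package name is an assumption).

 import multiprocessing as mp
 import cv2
 import numpy as np
 from tflite_runtime.interpreter import Interpreter  # assumption: package name
 
 def capture(queue):
     # Stream frames from the camera; drop frames while inference is busy
     cap = cv2.VideoCapture(0)  # /dev/video0
     while True:
         ret, frame = cap.read()
         if ret and queue.empty():
             queue.put(frame)
 
 def infer(queue, model_path):
     # Run the neural network inference on each received frame
     interpreter = Interpreter(model_path=model_path)
     interpreter.allocate_tensors()
     inp = interpreter.get_input_details()[0]
     out = interpreter.get_output_details()[0]
     _, h, w, _ = inp['shape']
     while True:
         img = cv2.cvtColor(cv2.resize(queue.get(), (w, h)), cv2.COLOR_BGR2RGB)
         interpreter.set_tensor(inp['index'], np.expand_dims(img, axis=0))
         interpreter.invoke()
         print("top class index:", np.argmax(interpreter.get_tensor(out['index'])))
 
 if __name__ == '__main__':
     q = mp.Queue(maxsize=1)
     model = "/usr/local/demo-ai/models/mobilenet/mobilenet_v1_0.5_128_quant.tflite"
     mp.Process(target=capture, args=(q,), daemon=True).start()
     infer(q, model)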

4.1.1.2. How to use it

The label_tfl_multiprocessing.py Python script accepts several input parameters:

-i, --image          image directory with images to be classified
-v, --video_device   video device (default /dev/video0)
--frame_width        width of the camera frame (default is 640)
--frame_height"      height of the camera frame (default is 480)
--framerate          framerate of the camera (default is 30fps)
-m, --model_file     tflite model to be executed
-l, --label_file     name of file containing labels
--input_mean         input mean
--input_std          input standard deviation
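
These options map onto a standard Python argparse definition; the following is a hypothetical reconstruction for illustration only, not the actual script source:

 import argparse
 
 # Hypothetical reconstruction of the option list above (not the actual source)
 parser = argparse.ArgumentParser(description="TensorFlow Lite image classification")
 parser.add_argument("-i", "--image", help="image directory with images to be classified")
 parser.add_argument("-v", "--video_device", default="/dev/video0", help="video device")
 parser.add_argument("--frame_width", type=int, default=640, help="width of the camera frame")
 parser.add_argument("--frame_height", type=int, default=480, help="height of the camera frame")
 parser.add_argument("--framerate", type=int, default=30, help="framerate of the camera")
 parser.add_argument("-m", "--model_file", help="tflite model to be executed")
 parser.add_argument("-l", "--label_file", help="name of file containing labels")
 parser.add_argument("--input_mean", type=float, help="input mean")
 parser.add_argument("--input_std", type=float, help="input standard deviation")
 args = parser.parse_args()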

To ease launching the Python script, two shell scripts are available:

  • Launch image classification based on camera frame inputs
 /usr/local/demo-ai/python/launch_python_label_tfl_mobilenet.sh
  • Launch image classification based on the pictures located in the /usr/local/demo-ai/models/mobilenet/testdata directory
 /usr/local/demo-ai/python/launch_python_label_tfl_mobilenet_testdata.sh


4.1.1.3. Mobilenet V1
4.1.1.3.1. Default model is Mobilenet V1 0.5 128 quant

The default model used for tests is mobilenet_v1_0.5_128_quant.tflite, downloaded from https://www.tensorflow.org/lite/guide/hosted_models.
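
Because this model is quantized, the interpreter expects raw uint8 pixels; the --input_mean and --input_std options only matter for float models, which typically normalize the input as (pixel - mean) / std. The sketch below illustrates that distinction (the 127.5 defaults are a commonly used value for MobileNet float models, assumed here):

 import numpy as np
 
 def prepare_input(img, input_details, input_mean=127.5, input_std=127.5):
     # Quantized model (e.g. mobilenet_v1_0.5_128_quant): feed raw uint8 pixels
     if input_details['dtype'] == np.uint8:
         return np.expand_dims(img, axis=0)
     # Float model: normalize with the given mean and standard deviation
     img = (img.astype(np.float32) - input_mean) / input_std
     return np.expand_dims(img, axis=0)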

4.1.1.3.2. Testing another Mobilenet V1 model

You can test other models by downloading them directly onto the STM32MP1 board. For example:

 curl http://download.tensorflow.org/models/mobilenet_v1_2018_02_22/mobilenet_v1_1.0_224_quant.tgz | tar xzv -C /usr/local/demo-ai/models/mobilenet/
 python3 /usr/local/demo-ai/python/label_tfl_multiprocessing.py -m /usr/local/demo-ai/models/mobilenet/mobilenet_v1_1.0_224_quant.tflite -l /usr/local/demo-ai/models/mobilenet/labels.txt -i /usr/local/demo-ai/models/mobilenet/testdata/

4.1.2. Object detection

4.1.2.1. Description

An object detection neural network model allows you to identify and locate known objects within an image.
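
Assuming an SSD-style TensorFlow Lite detection model such as those listed among the TensorFlow Lite hosted models, the inference produces output tensors for bounding boxes, class indices and confidence scores. The sketch below shows how such outputs can be decoded; the detect.tflite file name is hypothetical and this is not the article's application code.

 import numpy as np
 from tflite_runtime.interpreter import Interpreter  # assumption: package name
 
 interpreter = Interpreter(model_path="detect.tflite")  # hypothetical file name
 interpreter.allocate_tensors()
 # ... set the input tensor to a resized camera frame here, then:
 interpreter.invoke()
 outputs = interpreter.get_output_details()
 boxes   = interpreter.get_tensor(outputs[0]['index'])[0]  # [N, 4] normalized ymin, xmin, ymax, xmax
 classes = interpreter.get_tensor(outputs[1]['index'])[0]  # [N] class indices
 scores  = interpreter.get_tensor(outputs[2]['index'])[0]  # [N] confidence scores
 for box, cls, score in zip(boxes, classes, scores):
     if score > 0.5:  # keep confident detections only
         print(int(cls), float(score), box)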
