This article explains how to use the people tracking and heatmap application based on the OpenVX [1] back-end.
1. Description[edit | edit source]
The people tracking and heatmap neural network model allows the identification and localization of people within an image, with the possibility to enable a tracker and a heatmap based on the current and previous locations of people in the scene. The model used with this application is YOLOv8, a state-of-the-art deep learning model for object detection.
The application demonstrates a computer vision use case where frames are captured from a camera input (/dev/videox) and analyzed by a neural network model executed using the OpenVX framework.
A GStreamer pipeline is used to capture camera frames (using v4l2src), to execute the neural network inference (using appsink), and to send the images (using udpsink) along with the neural network information over Ethernet.
The inference result is displayed on a remote PC connected to the STM32MPU board. The overlay is created using OpenCV and Tkinter.
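To make the pipeline structure more concrete, here is a minimal Python sketch of such a GStreamer pipeline. It is only an illustration: the device node, caps, encoding elements, host address, and port are assumptions and not necessarily those used by the application.

```python
# Minimal GStreamer pipeline sketch (illustrative only): capture frames from a
# camera, feed them to an appsink for neural network inference, and send a
# JPEG/RTP copy to a remote PC over UDP. Device, host and port are examples.
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

HOST_PC_IP = "192.168.1.10"   # hypothetical host PC address

pipeline = Gst.parse_launch(
    "v4l2src device=/dev/video0 ! tee name=t "
    "t. ! queue ! videoconvert ! video/x-raw,format=RGB ! "
    "appsink name=ai_sink emit-signals=true "
    "t. ! queue ! videoconvert ! jpegenc ! rtpjpegpay ! "
    "udpsink host=" + HOST_PC_IP + " port=5000"
)

def on_new_sample(sink):
    # Pull the frame buffer; the neural network inference would run here.
    sample = sink.emit("pull-sample")
    return Gst.FlowReturn.OK

pipeline.get_by_name("ai_sink").connect("new-sample", on_new_sample)
pipeline.set_state(Gst.State.PLAYING)
# A GLib.MainLoop (not shown) would normally keep the script running here.
```

In the application, the frames pulled from the appsink branch are the ones analyzed by the neural network, while the udpsink branch carries the video to the remote PC.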
The model used in this application is YOLOv8, downloaded from the Ultralytics GitHub fork repository[2].
Information: For this application, a TensorFlow Lite per-tensor asymmetric quantized model is used, which is accelerated using the neural processing unit (NPU). The model is then converted to the NBG format using the ST Edge AI tool. For more information about this tool, refer to the dedicated article.
2. Installation on target[edit | edit source]
2.1. Install from the OpenSTLinux AI package repository[edit | edit source]
After configuring the AI OpenSTLinux package, install the X-LINUX-AI components for the people tracking and heatmap application:
2.2. Source code location[edit | edit source]
- In the OpenSTLinux Distribution with X-LINUX-AI Expansion Package:
- <Distribution Package installation directory>/layers/meta-st/meta-st-x-linux-ai/recipes-samples/people-tracking-heatmap/files/stai_mpu
- On the target:
- /usr/local/x-linux-ai/people-tracking-heatmap/stai_mpu_people_tracking_heatmap.py
- On GitHub:
3. How to use the application[edit | edit source]
3.1. Launching via the demo launcher[edit | edit source]
Warning: The application cannot be launched using the demo launcher or the command line.
When you click the icon to run the people tracking and heatmap Python OpenVX application, a message appears:
This application is launched from the host PC by using the docker application.
To learn how to run this application, refer to the next section.
4. Installation on host PC[edit | edit source]
To simplify the deployment of the host PC application, a Docker container is available with all the source files. This Docker image is based on Ubuntu 20.04, and the whole environment is configured in the Dockerfile.
4.1. Build docker[edit | edit source]
Warning: Docker must be installed properly on the host PC to be able to execute this part.
First, build the Docker image from the sources provided in the meta-st-x-linux-ai repository. Navigate to the host-script directory, which contains the st_people_tracking_heatmap_host.tar.xz archive:
cd <Distribution Package installation directory>/layers/meta-st/meta-st-x-linux-ai/host-script/
Uncompress the archive:
tar -xJvf st_people_tracking_heatmap_host.tar.xz && cd st_people_tracking_heatmap_host/docker/
Build the Docker image:
docker build --network=host --build-arg http_proxy --build-arg https_proxy --no-cache -t st_people_tracking_heatmap_host .
Allow local connections to the X server:
xhost +local:
Start the Docker container:
docker run --network=host -e DISPLAY=$DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix:ro -it --rm --name my-running-app st_people_tracking_heatmap_host
The following overlay appears on the screen:
To start the application, write the host PC IP address and the board IP address in the Host PC IP and Board IP fields, respectively.
Once the application starts, the camera stream appears. You can now use all the other options to customize the demo's appearance.
5. Tracking explanation[edit | edit source]
The tracking consists of three steps:
- Object detection:
- The neural network detects objects in individual video frames. It identifies the presence and location of objects.
- Object identification:
- Each detected object is assigned a unique identifier. This helps in distinguishing between different instances of the same object class (e.g., multiple people).
- Object tracking:
- The model keeps track of each object's movement across consecutive frames. It predicts the object's position in the next frame based on its current trajectory and speed.
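The snippet below is a minimal, hypothetical centroid-based sketch of these three steps. It only performs nearest-centroid association without the motion prediction mentioned above, and it is not the tracker actually implemented in the application.

```python
# Minimal centroid tracking sketch (illustrative): each detection is matched to
# the closest previously tracked object; unmatched detections get a new ID.
import math

class SimpleTracker:
    def __init__(self, max_distance=80):
        self.next_id = 0
        self.tracks = {}          # id -> (x, y) last known centroid
        self.max_distance = max_distance

    def update(self, detections):
        """detections: list of (x, y) centroids detected in the current frame."""
        assigned = {}
        for (x, y) in detections:
            # Object identification: find the closest existing track.
            best_id, best_dist = None, self.max_distance
            for track_id, (tx, ty) in self.tracks.items():
                dist = math.hypot(x - tx, y - ty)
                if dist < best_dist and track_id not in assigned:
                    best_id, best_dist = track_id, dist
            if best_id is None:
                # New object entering the scene: assign a unique identifier.
                best_id = self.next_id
                self.next_id += 1
            assigned[best_id] = (x, y)
        # Object tracking: keep only the tracks seen in this frame.
        self.tracks = assigned
        return assigned

tracker = SimpleTracker()
print(tracker.update([(100, 200), (400, 220)]))  # {0: (100, 200), 1: (400, 220)}
print(tracker.update([(105, 205)]))              # {0: (105, 205)}
```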
6. Heatmap explanation[edit | edit source]
A heatmap is a visual representation that shows the concentration or frequency of detected people in specific areas over time. Here is how it works with the YOLOv8 model:
- Detection and coordinate mapping:
- YOLOv8 detects people in each video frame and provides their coordinates (e.g., the bottom center of the bounding box).
- Heat accumulation:
- When a person is detected at a particular coordinate, the corresponding location in the heatmap increases in intensity or "heat".
- If a person remains stationary, the heat at that location continues to increase, indicating a higher concentration of presence.
- Heat dissipation:
- If a person moves away from a location, the heat in that cell gradually decreases over time.
- This simulates the fading of presence, reflecting less frequent occupancy.
- Visualization:
- The heatmap is visualized using colors ranging from cool (e.g., blue) to hot (e.g., red). Hotter colors indicate areas with higher concentrations of people over time.
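The following NumPy/OpenCV sketch illustrates this accumulate-and-decay principle; the grid resolution, heat increment, and decay factor are arbitrary example values, not the ones used by the application.

```python
# Minimal heatmap sketch (illustrative): accumulate "heat" where people are
# detected, let it decay everywhere else, and render it with a color map.
import numpy as np
import cv2

GRID_H, GRID_W = 72, 128       # coarse heat grid (example resolution)
HEAT_STEP = 10.0               # heat added per detection per frame (example value)
DECAY = 0.98                   # multiplicative decay applied every frame (example value)

heat = np.zeros((GRID_H, GRID_W), dtype=np.float32)

def update_heatmap(detections, frame_w=1280, frame_h=720):
    """detections: list of (x, y) points, e.g. the bottom centers of bounding boxes."""
    global heat
    heat *= DECAY                              # heat dissipation
    for (x, y) in detections:
        gx = min(int(x * GRID_W / frame_w), GRID_W - 1)
        gy = min(int(y * GRID_H / frame_h), GRID_H - 1)
        heat[gy, gx] += HEAT_STEP              # heat accumulation
    # Visualization: normalize to 0-255 and apply a blue-to-red color map.
    norm = np.clip(heat / max(heat.max(), 1e-6) * 255, 0, 255).astype(np.uint8)
    colored = cv2.applyColorMap(norm, cv2.COLORMAP_JET)
    return cv2.resize(colored, (frame_w, frame_h), interpolation=cv2.INTER_LINEAR)

overlay = update_heatmap([(640, 600), (200, 500)])   # detections from one frame
```

Conceptually, changing the decay factor and the normalization ceiling is what distinguishes the Live, 1 hour, and Infinite views described in the application options below.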
7. Using application options[edit | edit source]
Once the application is running, several options can be enabled or disabled. On the right, there are three tabs: "Connection", "Customization" and "Controls".
- Connection tab:
- This tab is used only to enter the IP addresses of the host PC and of the target. Once this is done, it is no longer needed.
- Customization tab:
- This tab offers overlay customization functionalities such as theme selection, video preview size selection, and the possibility to hide or display the text box.
- Controls tab:
- This tab is separated into two sections.
- Tracking: this section allows the user to enable or disable the tracking mechanism, and to enable or disable the trace following the detected people when tracking is activated.
- Heatmap: this section allows the user to enable or disable the heatmap and to choose between three different heatmaps: Live, 1 hour, and Infinite. Additionally, it allows scaling the last heatmap, called "Infinite", using a scroll bar (a sketch illustrating these three views follows this list).
- The Live view heats and cools very quickly.
- The 1 hour view displays information from the past hour, taking more time to heat and cool down.
- The Infinite view shows all information since the application started, with the scroll bar reducing the maximum value used to display positions as heat.
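As a rough illustration of how these three views could differ, the sketch below maps each mode to a decay factor and a normalization ceiling. The constants and names are pure assumptions chosen to show the idea; the application's actual values are not documented here.

```python
# Illustrative mapping of the three heatmap views to a decay behavior.
# The constants below are assumptions, not the application's real values.
import numpy as np

HEATMAP_MODES = {
    "Live":     {"decay": 0.90,  "ceiling": None},   # heats and cools very quickly
    "1 hour":   {"decay": 0.999, "ceiling": None},   # slow decay, roughly an hour of history
    "Infinite": {"decay": 1.0,   "ceiling": 500.0},  # never decays; the scroll bar lowers
}                                                    # the ceiling used to color the heat

def step(heat, mode):
    """Apply one frame of decay and return the ceiling used to normalize colors."""
    cfg = HEATMAP_MODES[mode]
    heat *= cfg["decay"]
    ceiling = cfg["ceiling"] if cfg["ceiling"] is not None else max(heat.max(), 1e-6)
    return heat, ceiling

heat = np.zeros((72, 128), dtype=np.float32)
heat[10, 20] = 100.0
heat, ceiling = step(heat, "Infinite")   # no decay, fixed ceiling set by the scroll bar
```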
8. References[edit | edit source]