This article explains how to use the teacher-student learning feature for object detection applications with on-device automatic labeling on JupyterLab [1].
1. Description[edit | edit source]
The goal of this demonstration application is to illustrate the concept of transfer learning using the ONNX Runtime training API [2] on an STM32MP2 series board. We will use a custom dataset for which the data is retrieved, processed, and labeled for training directly on the device. The complete workflow runs on JupyterLab, a web-based interactive development environment for notebooks that allows users to configure and arrange workflows in data science and machine learning.
2. Installation[edit | edit source]
2.1. Install from the OpenSTLinux AI package repository[edit | edit source]
After having configured the AI OpenSTLinux package repository, it is possible to install the X-LINUX-AI components for the on-device learning application.
2.2. Install and configure the JupyterLab on target[edit | edit source]
First, install the JupyterLab server on the STM32MP2 series board.
- To install the JupyterLab package, use the following command:
apt-get install jupyter-lab-service
Once the package is installed, the JupyterLab server must be configured to run in the correct directory.
- To configure the JupyterLab server, use the following command:
sed -i "s|# c.FileContentsManager.root_dir = |c.FileContentsManager.root_dir = '/usr/local/x-linux-ai/on-device-learning/'|" /home/jupyter/.jupyter/jupyter_server_config.py
systemctl restart jupyterlab-session
2.3. Install the teacher-student JupyterLab-based application[edit | edit source]
After having installed the JupyterLab server that will be hosting the application, install the application package itself.
- To install the teacher-student JupyterLab-based application package, use the following command:
x-linux-ai -i odl-teacher-student-app-jupyterlab
Info: To have an application running in standalone mode, the package installation comes along with the training artifacts of the student model SSD MobileNet V2, required for running the on-device learning feature, and the teacher model RT-DETR, required for annotating the data. However, users are free to generate their own training artifacts by following the dedicated article.
- Then, restart the demo launcher:
systemctl restart weston-graphical-session.service
2.4. Export the teacher model: RT-DETR (optional)[edit | edit source]
The application package comes along with the teacher model integrated. It can be found at the following path on the target: /usr/local/x-linux-ai/on-device-learning/teacher_model/rt-detr/rtdetr-l.onnx. However, users can export it on their own by first installing the Ultralytics package [3] with the following command:
pip install ultralytics
Once the package is installed on your host machine, export the RT-DETR model to the ONNX [4] format by running the following script on the host machine:
from ultralytics import RTDETR
# Load a model
model = RTDETR("rtdetr-l.pt")
# Export the model to ONNX format
path = model.export(format="onnx", imgsz=256)
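Optionally, you can sanity-check the exported model on the host before deploying it. The following is a minimal sketch, assuming the onnxruntime package is installed on the host machine; it loads the model and prints its input signature:
import onnxruntime as ort

# Load the exported model and print its input signature
session = ort.InferenceSession("rtdetr-l.onnx")
for model_input in session.get_inputs():
    print(model_input.name, model_input.shape, model_input.type)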
A new rtdetr-l.onnx model is generated. Deploy it to the same location on the target board:
scp <path/to/model>/rtdetr-l.onnx <your-board-ip-addr>:/usr/local/x-linux-ai/on-device-learning/teacher_model/rt-detr/
2.5. Generate the student model training artifacts (optional)[edit | edit source]
The application package comes along with the student model training artifacts installed. They can be found at the following path on the target: /usr/local/x-linux-ai/on-device-learning/student_model/ssd_mobilenet_v2/training_artifacts/. However, users can generate them on their own by following the dedicated article. Deploy them to your application by running the commands below:
scp <path/to/model>/*_model.onnx <your-board-ip-addr>:/usr/local/x-linux-ai/on-device-learning/student_model/training_artifacts/
scp <path/to/model>/checkpoint <your-board-ip-addr>:/usr/local/x-linux-ai/on-device-learning/student_model/training_artifacts/
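For reference, training artifacts for ONNX Runtime on-device training are typically generated on the host with the ORT artifacts API. The snippet below is a minimal sketch only: the model file name, the parameter split, and the loss are illustrative placeholders, and the dedicated article describes the exact procedure for the SSD MobileNet V2 student model.
import onnx
from onnxruntime.training import artifacts

# Load the inference graph of the student model (illustrative file name)
model = onnx.load("ssd_mobilenet_v2.onnx")

# Train only a subset of the parameters and freeze the rest (illustrative split)
all_params = [p.name for p in model.graph.initializer]
requires_grad = [n for n in all_params if "head" in n]
frozen_params = [n for n in all_params if n not in requires_grad]

# Generates training_model.onnx, eval_model.onnx, optimizer_model.onnx and checkpoint
artifacts.generate_artifacts(
    model,
    requires_grad=requires_grad,
    frozen_params=frozen_params,
    loss=artifacts.LossType.MSELoss,   # placeholder: detection uses a custom loss
    optimizer=artifacts.OptimType.AdamW,
    artifact_directory="training_artifacts",
)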
3. How to run the application[edit | edit source]
After having configured and installed the application package, make sure that the STM32MP2 series board is connected to the same network as the host machine. Open a web browser window on the host machine and navigate to:
http://<YOUR_BOARD_IP_ADDR>:8888
When connecting for the first time, a JupyterLab window asks you to type a password to log in. The default password is stm32mp.
The left tab shows the filesystem of your application, which contains:
- The odl_teacher_student_obj_detect.ipynb file: the notebook that will be run on JupyterLab.
- The student_model directory: containing the training artifacts of the SSD MobileNet V2 previously installed.
- The teacher_model directory: containing the ONNX inference model of the RT-DETR previously installed.
- The dataset directory: where all the images and annotations are saved during the data collection and data annotation phases.
These are the main elements required to run the application hosted on the JupyterLab server; the remaining files and directories are used by the GTK UI-based demo application. The goal of this Jupyter notebook is to guide you through all the steps of the teacher-student workflow, explore the concept of transfer learning using a sample dataset, and evaluate the ORT training API on an STM32MP257 device. We will use a custom dataset, where the data is directly acquired, processed, and labeled for training on the device itself, which enhances data privacy.
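To illustrate the automatic labeling idea, the sketch below runs the teacher model on a single image with ONNX Runtime; the pre-processing and the sample file name are illustrative simplifications of what the notebook actually does:
import numpy as np
import onnxruntime as ort
from PIL import Image

# Run the teacher model on one image (pre-processing simplified)
session = ort.InferenceSession(
    "/usr/local/x-linux-ai/on-device-learning/teacher_model/rt-detr/rtdetr-l.onnx")
image = Image.open("sample.jpg").convert("RGB").resize((256, 256))
tensor = np.asarray(image, dtype=np.float32).transpose(2, 0, 1)[None] / 255.0
outputs = session.run(None, {session.get_inputs()[0].name: tensor})
# The raw detections are then filtered by confidence and stored as
# annotations for training the student model (format is application-specific)
print([output.shape for output in outputs])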
To run the notebook, execute each cell sequentially by selecting the cell and pressing Shift + Enter or clicking the Run button in the toolbar. Each cell contains a specific step in the workflow, such as importing libraries, loading the dataset, preprocessing data, training the model, and evaluating the results. By running the cells interactively, you can observe the outputs, debug issues, and make modifications in real time.
This interactive approach makes the notebook an ideal tool for prototyping, learning, and demonstrating machine learning workflows such as this teacher-student use case.
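As an illustration of what the training cells do, here is a minimal sketch of a single training step with the ORT training API. It assumes the training artifacts installed above (file names follow the ORT artifacts defaults), and the input and label shapes are hypothetical:
import numpy as np
from onnxruntime.training.api import CheckpointState, Module, Optimizer

ARTIFACTS = "student_model/training_artifacts"  # installed with the package

# Load the checkpoint and the pre-generated training artifacts
state = CheckpointState.load_checkpoint(f"{ARTIFACTS}/checkpoint")
module = Module(f"{ARTIFACTS}/training_model.onnx", state, f"{ARTIFACTS}/eval_model.onnx")
optimizer = Optimizer(f"{ARTIFACTS}/optimizer_model.onnx", module)

# One illustrative training step on a dummy batch (shapes are hypothetical)
module.train()
images = np.random.rand(1, 3, 256, 256).astype(np.float32)
labels = np.zeros((1, 10), dtype=np.float32)
loss = module(images, labels)   # forward pass + loss from the training graph
optimizer.step()                # apply the gradients
module.lazy_reset_grad()        # clear gradients before the next step
print("training loss:", loss)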
Info: Unlike the on-device-learning UI-based application, the JupyterLab-based application does not provide real-time inference on a video stream from the camera sensor. For the sake of simplicity, only inference on images is supported.
4. References[edit | edit source]