FP-AI-MONITOR2 user manual

Sensing is a major part of smart objects and equipment, for example, condition monitoring for predictive maintenance. It enables context awareness and production performance improvement, and results in a drastic decrease in downtime thanks to preventive maintenance.

The FP-AI-MONITOR2 function pack is a multi-sensor AI data monitoring framework for STM32Cube running on a wireless industrial node. It helps to jump-start the implementation and development of sensor-monitoring-based applications designed with X-CUBE-AI, an Expansion Package for STM32Cube, or with NanoEdge™ AI Studio, an AutoML tool generating AI models for tiny microcontrollers. It covers the entire Machine Learning design cycle, from dataset acquisition to integration and deployment on a physical node.

The FP-AI-MONITOR2 runs learning and inference sessions in real time on the SensorTile Wireless Industrial Node development kit box (STEVAL-STWINBX1), taking data from the onboard sensors as input. It implements a wired interactive CLI to configure the node and manage the learn and detect phases. For simple in-the-field operation, a standalone battery-operated mode is also supported, which allows basic control through the user button, without using the console.

The STEVAL-STWINBX1 has an STM32U585AIIxQ microcontroller, an ultra-low-power Arm® Cortex®-M33 MCU with FPU and TrustZone® at 160 MHz, 2 Mbytes of flash memory, and 786 Kbytes of SRAM. In addition, the STEVAL-STWINBX1 embeds industrial-grade sensors, including a 6-axis IMU, a 3-axis accelerometer, a 3-axis vibration sensor, and analog microphones, to record inertial, vibration, and acoustic data in the field with high accuracy at high sampling frequencies.

The rest of the article discusses the following topics:

  • General information about FP-AI-MONITOR2,
  • Setting up the hardware and software components,
  • Button-operated modes,
  • Command-line interface (CLI),
  • Human activity recognition, a classification application using accelerometer data, and a pre-trained AI model powered by X-CUBE-AI,
  • Anomaly detection using NanoEdgeTM AI,
  • Anomaly classification using NanoEdgeTM AI classification libraries based on vibration and inertial data,
  • Dual-mode to run the anomaly detection using the NanoEdgeTM AI library and state classification of a USB fan based on the ultrasonic data from an analog microphone, using a pre-trained model powered by X-CUBE-AI,
  • Data logging using onboard vibration sensors and a prebuilt binary of FP-SNS-DATALOG2, and
  • Some links to useful online resources, to help a user better understand and customize the project for specific needs.
Information
NOTE: The NanoEdge™ library generation itself is out of the scope of this function pack and must be done using NanoEdge™ AI Studio.

1. General information

1.1. Feature overview

  • Complete firmware to program an STM32U5 sensor node for an AI-based sensor monitoring application on the STEVAL-STWINBX1 SensorTile wireless industrial node
  • Support for the classical Machine Learning (ML) and Artificial Neural Network (ANN) models generated by the X-CUBE-AI, an STM32Cube Expansion Package
  • Support for the NanoEdge™ AI libraries generated by NanoEdge™ AI Studio for AI-based anomaly detection applications. Easy integration of live libraries by replacing the pre-integrated stub
  • Application example of human activity classification based on motion sensors
  • Application example of combined anomaly detection based on vibration and anomaly classification based on ultrasound
  • Application binary of high-speed datalogger for STEVAL-STWINBX1 data record from any combination of the environmental sensors and microphones configured up to the maximum sampling rate on a microSD™ card
  • Sensor manager firmware module to configure any of the onboard sensors easily, and suitable for production applications
  • eLooM (embedded Light object-oriented fraMework) enabling efficient development of soft real-time, multitasking, event-driven embedded applications on STM32U5 Series microcontrollers
  • Digital processing unit (DPU) firmware module providing a set of processing blocks, which can be chained together, to apply mathematical transformations to the sensor data
  • Configurable autonomous mode controlled by the user button
  • Interactive command-line interface (CLI):
    • Node and sensor configuration
    • Configure the application running either an X-CUBE-AI (ML or ANN) model or a NanoEdge™ AI Studio anomaly detection model with the learn-and-detect capability
    • Configure the application concurrently running an X-CUBE-AI ANN model and a NanoEdge™ AI Studio model with the learn-and-detect capability
    • Configure the application running a NanoEdge™ AI Studio model for classification using vibration data
  • Easy portability across STM32 microcontrollers using the STM32Cube ecosystem
  • Free and user-friendly license terms

1.2. Software architecture

The top-level architecture of the FP-AI-MONITOR2 function pack is shown in the following figure.

BlockDiagram FP-AI-MONITOR2.png

The STM32Cube function packs leverage the modularity and interoperability of STM32 Nucleo and expansion boards running STM32Cube MCU Packages and Expansion Packages to create functional examples representing some of the most common use cases in certain applications. The function packs are designed to fully exploit the underlying STM32 ODE hardware and software components to best satisfy the final user application requirements.

Function packs may include additional libraries and frameworks, not present in the original STM32Cube Expansion Packages, which enable new functions and create more targeted and usable systems for developers.

STM32Cube ecosystem includes:

  • A set of user-friendly software development tools to cover project development from the design to the implementation, among which are:
    • STM32CubeMX, a graphical software configuration tool that allows the automatic generation of C initialization code using graphical wizards
    • STM32CubeIDE, an all-in-one development tool with peripheral configuration, code generation, code compilation, and debug features
    • STM32CubeProgrammer (STM32CubeProg), a programming tool available in graphical and command-line versions
    • STM32CubeMonitor (STM32CubeMonitor, STM32CubeMonPwr, STM32CubeMonRF, STM32CubeMonUCPD), powerful monitoring tools to fine-tune the behavior and performance of STM32 applications in real time.
  • STM32Cube MCU & MPU Packages, comprehensive embedded-software platforms specific to each microcontroller and microprocessor series (such as STM32CubeU5 for the STM32U5 Series), which include:
    • STM32Cube hardware abstraction layer (HAL), ensuring maximized portability across the STM32 portfolio
    • STM32Cube low-layer APIs, ensuring the best performance and footprints with a high degree of user control over the HW
    • A consistent set of middleware components, such as the Azure RTOS ThreadX kernel, USBX USB stack, and FileX file system
    • All embedded software utilities with full sets of peripheral and application examples
  • STM32Cube Expansion Packages, which contain embedded software components that complement the functionalities of the STM32Cube MCU & MPU Packages with:
    • Middleware extensions and application layers
    • Examples running on some specific STMicroelectronics development boards

To access and use the sensor expansion board, the application software uses:

  • STM32Cube hardware abstraction layer (HAL): provides a simple, multi-instance set of generic and extension APIs (application programming interfaces) to interact with the upper-layer applications, libraries, and stacks. It is based on a generic architecture and allows the layers built on it, such as the middleware layer, to implement their functions without requiring the specific hardware configuration of a given microcontroller unit (MCU). This structure improves library code reusability and guarantees easy portability across other devices.
  • Board support package (BSP) layer: supports the peripherals on the STEVAL-STWINBX1 board.

1.3. Folder structure

FP-AI-MONITOR2 folder structure

The figure above shows the contents of the function pack folder. The content of each of these subfolders is as follows:

  • Documentation: Contains a compiled .chm file generated from the source code, which details the software components and APIs.
  • Drivers: Contains the HAL drivers, the board-specific drivers for each supported board or hardware platform (including the onboard components), and the CMSIS vendor-independent hardware abstraction layer for the Cortex®-M processors.
  • Middlewares: Contains libraries and protocols from ST as well as from third parties. The ST components include the eLooM libraries, X-CUBE-AI runtime libraries, NanoEdge™ AI library substitutes, audio preprocessing library, FFT library, and USB device library.
  • Projects: Contains sample application software, which can be used to program the sensor board for classification and anomaly detection applications using the data from the inertial sensors.
  • Utilities: Contains Python™ scripts and sample datasets. These scripts can be used to:
    • create Human Activity Recognition (HAR) models using:
      • Convolutional Neural Networks (CNN)
      • Support Vector Classifiers (SVC)
    • create Ultrasound Classification (USC) models using Convolutional Neural Networks (CNN)
    • prepare data for anomaly detection library generation with NanoEdge™ AI Studio,
    • prepare data for anomaly classification library generation with NanoEdge™ AI Studio, and
    • prepare data for HAR with NanoEdge™ AI Studio.

1.4. Terms and definitions

Acronym Definition
API Application Programming Interface
BSP Board Support Package
CLI Command-Line Interface
FP Function Pack
HAL Hardware Abstraction Layer
MCU Microcontroller Unit
ML Machine Learning
AI Artificial Intelligence
NEAI NanoEdgeTM AI
SVC Support Vector Classifier
SVM Support Vector Machine
ANN Artificial Neural Network
CNN Convolutional Neural Network
ODE Open Development Environment
USC Ultrasound Classification
MFCC Mel-Frequency Cepstral Coefficient
Acronyms used in this article.

1.5. References

References Description Source
[1] X-CUBE-AI X-CUBE-AI
[2] NanoEdgeTM AI Studio st.com/nanoedge
[3] STEVAL-STWINBX1 STWINBX1

1.6. Licenses

FP-AI-MONITOR2 is delivered under the Mix Ultimate Liberty+OSS+3rd-party V1 software license agreement (SLA0048).

The software components provided in this package come with different license schemes as described in the table below.

Software component Copyright License
Arm® Cortex®-M CMSIS Arm Limited Apache License 2.0
Azure RTOS ThreadX Microsoft Corporation Microsoft Software License Terms
Azure RTOS USBX Microsoft Corporation Microsoft Software License Terms
STM32U5xx_HAL_Driver STMicroelectronics BSD-3-Clause
STM32U5xx CMSIS Arm Limited - STMicroelectronics Apache License 2.0
eLooM application framework STMicroelectronics Proprietary
PythonTM scripts STMicroelectronics BSD-3-Clause
Dataset STMicroelectronics Proprietary
Sensor Manager STMicroelectronics Proprietary
Audio preprocessing library STMicroelectronics Proprietary
X-CUBE-AI runtime library STMicroelectronics Proprietary
X-CUBE-AI models STMicroelectronics Proprietary
NanoEdgeTM AI library stub STMicroelectronics Proprietary
Signal processing library STMicroelectronics Proprietary
Digital processing unit (DPU) STMicroelectronics Proprietary

2. Hardware and firmware setup

2.1. HW prerequisites and setup

To use the FP-AI-MONITOR2 function pack on STEVAL-STWINBX1, the following hardware items are required:

  • an STEVAL-STWINBX1 development kit board,
  • a microSD™ card and card reader to log and read the sensor data,
  • a Windows® powered laptop/PC,
  • a USB Type-C® cable, to connect the sensor board to the PC,
  • a USB Micro-B cable, for the STLINK-V3MINI, and
  • an STLINK-V3MINI.
FP-AI-MONITOR2-hardware.png

2.1.1. Presentation of the target STM32 board

The STWIN.box (STEVAL-STWINBX1) is a development kit and reference design that simplifies prototyping and testing of advanced industrial sensing applications in IoT contexts such as condition monitoring and predictive maintenance.

It is powered by an ultra-low-power Arm® Cortex®-M33 MCU with FPU and TrustZone® at 160 MHz, with 2 Mbytes of flash memory (STM32U585AI).

It is an evolution of the original STWIN kit (STEVAL-STWINKT1B) and features a higher mechanical accuracy in the measurement of vibrations, improved robustness, an updated BoM to reflect the latest and best-in-class MCU and industrial sensors, and an easy-to-use interface for external add-ons.

The STWIN.box kit consists of an STWIN.box core system, a 480 mAh LiPo battery, an adapter for the ST-LINK debugger, a plastic case, an adapter board for DIL24 sensors, and a flexible cable.

Other features:

  • MicroSD card slot for standalone data logging applications
  • On-board Bluetooth® low energy v5.0 wireless technology (BlueNRG-M2), Wi-Fi (EMW3080) and NFC (ST25DV04K)
  • Option to implement authentication and brand protection secure solution with STSAFE-A110
  • Wide range of industrial IoT sensors:
    • Ultra-wide bandwidth (up to 6 kHz), low-noise, 3-axis digital vibration sensor (IIS3DWB)
    • 3D accelerometer + 3D gyro iNEMO inertial measurement unit (ISM330DHCX) with Machine Learning Core
    • High-performance ultra-low-power 3-axis accelerometer for industrial applications (IIS2DLPC)
    • Ultra-low power 3-axis magnetometer (IIS2MDC)
    • High-accuracy, high-resolution, low-power, 2-axis digital inclinometer with Embedded Machine Learning Core (IIS2ICLX)
    • Dual full-scale, 1.26 bar and 4 bar, absolute digital output barometer in full-mold package (ILPS22QS)
    • Low-voltage, ultra-low-power, 0.5°C accuracy I²C/SMBus 3.0 temperature sensor (STTS22H)
    • Industrial grade digital MEMS microphone (IMP34DT05)
    • Analog MEMS microphone with a frequency response of up to 80 kHz (IMP23ABSU)
  • Expandable via a 34-pin FPC connector

2.2. Software requirements

2.2.1. FP-AI-MONITOR2

  • Download the latest version of the FP-AI-MONITOR2 package from the ST website, extract the .zip file, and copy its contents into a folder on the PC. The package contains binaries, source code, and utilities for the STEVAL-STWINBX1 sensor board.

2.2.2. IDE

Information
All the steps presented in this document are carried out with STM32CubeIDE, but either of the other two supported IDEs could have been used.

2.2.3. STM32CubeProgrammer

  • STM32CubeProgrammer is an all-in-one multi-OS software tool for programming STM32 products. It provides an easy-to-use and efficient environment for reading, writing, and verifying device memory through both the debug interface (JTAG and SWD) and the bootloader interface (UART, USB DFU, I2C, SPI, and CAN). STM32CubeProgrammer offers a wide range of features to program STM32 internal memories (such as flash, RAM, and OTP) as well as external memory. Download the latest version of the STM32CubeProgrammer. The FP-AI-MONITOR2 is tested with the STM32CubeProgrammer version 2.12.0.
  • This software is available from STM32CubeProg.

2.2.4. Tera Term

  • Tera Term is an open-source and freely available software terminal emulator, which is used to host the CLI of the FP-AI-MONITOR2 through a serial connection.
  • Users can download and install the latest version available from the Tera Term website.

2.2.5. STM32CubeMX

STM32CubeMX is a graphical tool that allows a very easy configuration of STM32 microcontrollers and microprocessors, as well as the generation of the corresponding initialization C code for the Arm® Cortex®-M core (or a partial Linux® Device Tree for the Arm® Cortex®-A core), through a step-by-step process. Its salient features include:

  • Intuitive STM32 microcontroller and microprocessor selection.
  • Generation of initialization C code projects, compliant with IAR™, Keil®, and STM32CubeIDE (GCC compilers), for the Arm® Cortex®-M core
  • Development of enhanced STM32Cube Expansion Packages thanks to STM32PackCreator, and
  • Integration of STM32Cube Expansion packages into the project.

FP-AI-MONITOR2 requires STM32CubeMX version 6.7.0 or later (tested with 6.7.0). To download STM32CubeMX and obtain details of all its features, visit st.com.

2.2.6. X-CUBE-AI

X-CUBE-AI is an STM32Cube Expansion Package, part of the STM32Cube.AI ecosystem, that extends STM32CubeMX capabilities with the automatic conversion of pretrained Artificial Intelligence models and the integration of the generated optimized library into the user project. The easiest way to use it is to download it (version 8.0.0 or newer) from within the STM32CubeMX tool, as described in the user manual Getting started with X-CUBE-AI Expansion Package for Artificial Intelligence (AI) (UM2526). The X-CUBE-AI Expansion Package also offers several means to validate the AI models (both neural network and scikit-learn models) on a desktop PC and on STM32, as well as to measure performance on STM32 devices (computational and memory footprints) without any ad hoc handmade C code.

2.2.7. Python 3.10

Python™ is an interpreted high-level general-purpose programming language. Python's design philosophy emphasizes code readability with its notable use of significant indentation. Its language constructs, as well as its object-oriented approach, aim to help programmers write clear, logical code for small and large-scale projects. To install all the Python™ dependencies, the user must proceed in two steps:

  • Install all the dependencies for the High-Speed Datalogger Python SDK. For this, navigate to the FP-AI-MONITOR2_V1.0.0/Utilities/DataLog/HSDPython_SDK/ directory, and launch the command:
      ./HSDPython_SDK_install.bat
  • To build and export AI models, the user must set up a Python environment with a list of packages. The required packages and their versions are listed in the text file FP-AI-MONITOR2_V1.0.0/Utilities/AI_Resources/requirements.txt. The following command, issued in an Anaconda prompt or an Ubuntu terminal, installs all the packages specified in requirements.txt:
      pip install -r requirements.txt --upgrade-strategy only-if-needed

2.2.8. NanoEdgeTM AI Studio

NanoEdge™ AI Studio is a Machine Learning (ML) technology that brings true innovation easily to end users. In just a few steps, developers can create optimal ML libraries for anomaly detection, 1-class classification, n-class classification, and extrapolation, based on a minimal amount of data. The main features of NanoEdge™ AI Studio are:

  • Desktop tool for design and generation of an STM32-optimized library for anomaly detection and feature classification of temporal and multi-variable signals
  • Anomaly detection libraries are designed using very small datasets. They can learn normality directly on the STM32 microcontroller and detect defects in real time.
  • Classification libraries are designed with a very small, labeled dataset. They classify signals in real time.
  • Supports any type of sensor: vibration, magnetometer, current, voltage, multi-axis accelerometer, temperature, acoustic and more
  • Explore millions of possible algorithms to find the optimal library in terms of accuracy, confidence, inference time, and memory footprint
  • Generate very small footprint libraries running down to the smallest Arm® Cortex®-M0 microcontrollers
  • Embedded emulator to test library performance live with an attached STM32 board or from test data files
  • Easy portability across the various STM32 microcontroller series

This function pack supports the anomaly detection and n-class classification libraries generated by NanoEdge™ AI Studio. It enables users to log data, prepare and condition it, generate the libraries with NanoEdge™ AI Studio, and then embed these libraries in FP-AI-MONITOR2. NanoEdge™ AI Studio is available from www.st.com/stm32nanoedgeai. FP-AI-MONITOR2 is tested with NanoEdge™ AI Studio V3.3.0.

Information
An evaluation version of the NanoEdgeTM AI Studio can be freely downloaded to generate development libraries for selected STM32 Nucleo boards and Discovery kits, like STEVAL-STWINBX1.


2.3. Program firmware into the STM32 microcontroller

This section explains how to select the firmware binary file and program it into the STM32 microcontroller. A precompiled binary file is delivered as part of the FP-AI-MONITOR2 function pack. It is located in the FP-AI-MONITOR2_V1.0.0\Projects\STWIN.box\Applications\FP-AI-MONITOR2\Binary\ folder. When the STM32 board and the PC are connected through the USB cable on the STLINK-V3E connector, the STEVAL-STWINBX1 appears as a drive on the PC. The selected firmware binary can be programmed into the STM32 board by a simple drag-and-drop operation, as shown in the figure below. This opens a file-copy dialog; once the dialog disappears (without any error), the firmware is programmed into the STM32 microcontroller.

FP-AI-MONITOR2 flash.png

2.4. Using the serial console

A serial console is used to interact with the host board (Virtual COM port over USB). With the Windows® operating system, the use of the Tera Term software is recommended. Following are the steps to configure the Tera Term console for CLI over a serial connection.

2.4.1. Set the serial terminal configuration

Start Tera Term and select the proper connection, the one featuring [USB Serial Device]. In the screenshot below this is COM5, but it may vary from one setup to another.

FP-AI-MONITOR2 teraterm new connection.svg

Set the Terminal parameters:

FP-AI-MONITOR2 teraterm terminal setup.svg

The interactive serial console can be used with the default values.

2.4.2. Start FP-AI-MONITOR2 firmware

Restart the board by pressing the RESET button. The following welcome screen is displayed on the terminal.

FP-AI-MONITOR2 Console Welcome Message

From this point, start entering the commands directly or type help to get the list of available commands along with their usage guidelines.
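The CLI can also be driven programmatically from the host instead of through Tera Term. The sketch below is a minimal host-side helper, assuming a Python environment with the third-party pyserial package and a virtual COM port name such as COM5 (both are assumptions of this sketch; only the CR+LF line termination mirrors what a terminal emulator sends):

```python
# Minimal host-side alternative to Tera Term for sending FP-AI-MONITOR2 CLI
# commands (sketch; the COM port name and the use of pyserial are assumptions).

def frame_command(command: str) -> bytes:
    """Terminate a CLI command the way a terminal emulator does (CR + LF)."""
    return command.strip().encode("ascii") + b"\r\n"

def send_command(port_name: str, command: str, timeout_s: float = 1.0) -> str:
    """Open the board's virtual COM port, send one command, return the reply."""
    import serial  # third-party package: pip install pyserial
    with serial.Serial(port_name, timeout=timeout_s) as port:
        port.write(frame_command(command))
        return port.read(4096).decode(errors="replace")

# Example (requires the board to be connected):
#     print(send_command("COM5", "help"))
```

This kind of wrapper is convenient for scripting repetitive configuration sequences that would otherwise be typed by hand in the console.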

Info white.png Information
Note: The provided firmware is generated with a HAR model based on SVC, and users can test it straight out of the box. However, for the NanoEdge™ AI library, a substitute is provided in place of the library. The user must generate the library with NanoEdge™ AI Studio, replace the substitute with the generated library, and rebuild the firmware. These steps are described in detail later in this article.

3. Button-operated modes

This section provides details of the button-operated mode for FP-AI-MONITOR2. The purpose of this mode is to enable the users to operate the FP-AI-MONITOR2 on STWIN.box even in the absence of the CLI console.

In button-operated mode, the sensor node can be controlled through the user button instead of the interactive CLI console. The default values of the node parameters and the settings used during the autonomous mode are provided in the firmware. Based on these configurations, different modes such as dual, neai_learn, and neai_detect can be started and stopped through the user button on the node.

3.1. Interaction with user

The button-operated mode can work with or without the CLI and is fully compatible and consistent with the current definition of the serial console and its command-line interface (CLI).

The supporting hardware for this version of the function-pack (STEVAL-STWINBX1) is fitted with three buttons:

  1. User button, the only button usable by the software,
  2. Reset button, connected to the STM32 MCU reset pin,
  3. Power button, connected to power management,

and three LEDs:

  1. LED_1 (green), controlled by software,
  2. LED_2 (orange), controlled by software,
  3. LED_C (red), controlled by hardware, indicating the charging status when powered through a USB cable.

So, the basic user interaction for button-operated operation is done through two buttons (user and reset) and two LEDs (green and orange). The following provides details on how these resources are allocated to show the user which execution phases are active and to report the status of the sensor node.

3.1.1. Button allocation

The power button controls the powering of the device when it is connected to a charged battery:

Button Press Description Action
LONG_PRESS The button is pressed for more than 200 ms and released Powers up the device
SHORT_PRESS The button is pressed for less than 200 ms and released Powers down the device


In the extended autonomous mode, the user can trigger any of the three execution phases. The available modes are:

  1. idle: the system is waiting for a command.
  2. dual: runs the NanoEdge™ AI library to detect anomalies and the X-CUBE-AI library to classify them, printing the live inference results on the CLI (if the CLI is available).
  3. neai_learn: All data coming from the sensor is passed to the NanoEdgeTM AI library to train the model.

To trigger these phases, the FP-AI-MONITOR2 is equipped with the support of the user button. In the STEVAL-STWINBX1 sensor node, there are two software usable buttons:

  1. The user button: This button is fully programmable and is under the control of the application developer.
  2. The reset button: This button is connected to the hardware reset pin and is used to reset the sensor node. A reset clears the knowledge of the NanoEdge™ AI libraries and restores the context variables and sensor configurations to their default values.

To control the execution phases, at least three different press types of the user button must be defined and detected.

The following are the press types available for the user button and their assignments to the different operations:

Button Press Description Action
SHORT_PRESS The button is pressed for less than 200 ms and released Starts the dual mode that combines anomaly detection and anomaly classification.
LONG_PRESS The button is pressed for more than 200 ms and released Starts the anomaly detection learning phase.
ANY_PRESS The button is pressed and released (overlaps with the other press types) Stops the currently running execution phase.
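The press-type decision reduces to comparing the measured press duration against the 200 ms boundary. A minimal sketch of this rule (function and constant names are illustrative; only the 200 ms threshold comes from the tables above):

```python
# Press-type decision rule from the tables above (illustrative sketch;
# only the 200 ms threshold is taken from the manual).
PRESS_THRESHOLD_MS = 200  # boundary between SHORT_PRESS and LONG_PRESS

def classify_press(duration_ms: int) -> str:
    """Map a measured user-button press duration to a press type."""
    return "LONG_PRESS" if duration_ms > PRESS_THRESHOLD_MS else "SHORT_PRESS"
```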

3.1.2. LED allocation

In the function pack, six execution phases exist:

  • idle: The system waits for user input.
  • har: All data coming from the sensors is passed to the X-CUBE-AI library to perform HAR.
  • neai_learn: All data coming from the sensors is passed to the NanoEdge™ AI library to train the model.
  • neai_detect: All data coming from the sensors is passed to the NanoEdge™ AI library to detect anomalies.
  • neai_class: All data coming from the sensors is passed to the NanoEdge™ AI library to perform classification.
  • dual: All data coming from the sensors is passed to the NanoEdge™ AI library to detect anomalies and to the X-CUBE-AI library to classify them.


At any given time, the user needs to be aware of the currently active execution phase. The outcome of the detection must also be reported when the detect execution phase is active, telling the user whether an anomaly has been detected or, when the har context is running, which activity is being performed by the user.

The onboard LEDs indicate the status of the current execution phase by showing which context is running and also by showing the output of the context (anomaly or one of the four activities in the HAR case).

The green LED is used to show the user which execution context is being run.

Pattern Task
OFF -
ON IDLE
BLINK_SHORT X-CUBE-AI Running
BLINK_NORMAL NanoEdgeTM AI learn
BLINK_LONG NanoEdgeTM AI detection or classification or dual-mode

The orange LED is used to indicate the output of the running context, as shown in the table below:

Pattern Reporting
OFF Stationary (HAR) when in X-CUBE-AI mode, Normal Behavior when in NEAI mode
ON Biking (HAR)
BLINK_SHORT Jogging (HAR)
BLINK_LONG Walking (HAR) or anomaly detected (NanoEdgeTM AI detection or dual-mode)

Looking at these LED patterns, the user is aware of the state of the sensor node even when the CLI is not connected.
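The green-LED table above can be read as a simple lookup from execution phase to blink pattern. A sketch of that mapping, assuming the CLI phase names as keys (the pattern-to-phase pairs are taken from the table; the function name is illustrative):

```python
# Green-LED pattern lookup built from the table above (illustrative sketch).
GREEN_LED_PATTERN = {
    "idle": "ON",
    "har": "BLINK_SHORT",          # X-CUBE-AI running
    "neai_learn": "BLINK_NORMAL",  # NanoEdge AI learn
    "neai_detect": "BLINK_LONG",   # NanoEdge AI detection
    "neai_class": "BLINK_LONG",    # NanoEdge AI classification
    "dual": "BLINK_LONG",          # dual-mode
}

def green_led_for(phase: str) -> str:
    """Return the green-LED pattern for a phase ('OFF' when nothing runs)."""
    return GREEN_LED_PATTERN.get(phase, "OFF")
```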

4. Command-line interface

The command-line interface (CLI) is a simple method for the user to control the application by sending command-line inputs to be processed on the device.

4.1. Command execution model

The commands are grouped into three main sets:

  • (CS1) Generic commands
    This command set allows the user to get the generic information from the device like the firmware version, UID, compilation date and time, and so on, and to start and stop an execution phase.
  • (CS2) AI commands
    This command set contains commands which are AI-specific. These commands enable users to work with the X-CUBE-AI and NanoEdgeTM AI libraries.
  • (CS3) Sensor configuration commands
    This command set allows the user to configure the supported sensors and to get the current configurations of these sensors.

4.2. Execution phases and execution context

The five system execution phases are:

  • X-CUBE-AI: Data coming from the sensors is passed to the X-CUBE-AI model to run HAR classification inference.
  • NanoEdge™ AI learning: Data coming from the sensors is passed to the NanoEdge™ AI library to train the model.
  • NanoEdge™ AI detection: Data coming from the sensors is passed to the NanoEdge™ AI library to detect anomalies.
  • NanoEdge™ AI classification: Data coming from the sensors is passed to the NanoEdge™ AI n-class classification library to perform n-class classification.
  • Dual-mode: Data coming from the accelerometer is passed to the NanoEdge™ AI library to detect anomalies; when an anomaly is detected, the data from the analog microphone is passed to an ANN model to classify the running state of the setup.

Each execution phase can be started and stopped with a user command $ start <execution phase> issued through the CLI, where valid values for the execution phase are:

  • har,
  • neai_learn,
  • neai_detect,
  • neai_class, and
  • dual.

An execution context, which is a set of parameters controlling execution, is associated with each execution phase. One single parameter can belong to more than one execution context.

The CLI provides commands to set and get execution context parameters. The execution context cannot be changed while an execution phase is active. If the user attempts to set a parameter belonging to any active execution context, the requested parameter is not modified.

4.3. Command summary

Command name Command string Note
CS1 - Generic commands
help help Lists all registered commands with brief usage guidelines, including the list of applicable parameters.
info info Shows firmware details and version.
uid uid Shows STM32 UID.
reset reset Resets the MCU system.
CS2 - AI specific commands
start start <phase> Starts an execution phase according to its execution context. Available execution phases are:
  • har for the HAR classification based on a pre-trained model converted using X-CUBE-AI,
  • neai_learn for learning the normal conditions using NanoEdgeTM AI library for anomaly detection,
  • neai_detect for starting the detection of the anomalies using the NanoEdgeTM AI anomaly detection library,
  • neai_class for the classification using the NanoEdgeTM AI classification library and finally,
  • dual for the combination of anomaly detection using the NanoEdgeTM AI anomaly detection library and classification using the pre-trained model converted and deployed using X-CUBE-AI.
neai_init neai_init (Re)initializes the NanoEdge™ AI model and forgets any learning made on the device. Used at the beginning, or to restart the NanoEdge™ AI anomaly detection from scratch at any time.
neai_set neai_set <parameter> <value> Sets a NanoEdgeTM AI specific parameter. Valid parameters and values are:
  • sensitivity: float [0, 100]
  • threshold: integer [0, 100]
  • signals: integer [0, MAX_SIGNALS]
  • timer: integer [0, MAX_TIME_MS]
  • sensor: integer [0, 10]
neai_get neai_get <parameter> Displays the value of the NanoEdgeTM AI parameter. Valid parameters are:
  • sensitivity
  • threshold
  • signals
  • timer
  • sensor
  • all.
har_set har_set <parameter> <value> Sets an X-CUBE-AI HAR specific parameter. Valid parameters are:
  • sensor: integer [0, 10]
har_get har_get <parameter> Displays the value of the X-CUBE-AI HAR parameters. Valid parameters are:
  • info: information on the models
  • sensor: Associated sensor id
  • all: All the information regarding X-CUBE-AI model for HAR.
dual_set dual_set <parameter> <value> Sets a dual AI specific parameter. Valid parameters and values are:
  • sensor: the sensor used for classification, integer [0, 10]
dual_get dual_get <parameter> Gets a dual AI specific parameter. Valid parameters are:
  • info,
  • sensor, and
  • all.
CS3 - Sensor configuration commands
sensor_info sensor_info Gets a list of all supported sensors and their IDs.
sensor_set sensor_set <id> <parameter> <value> Sets the value of a parameter for a sensor with sensor id provided in id. The settable parameters are:
  • FS: Full scale,
  • ODR: Output data rate, and
  • enable: Active [1 for true (active), 0 for false (inactive)].
sensor_get sensor_get <id> <parameter> Gets the value of a parameter for a sensor with sensor id provided in id. Valid parameters are:
  • enable: Active [ true (active), false (inactive)]
  • ODR: output data rate
  • ODR_List: list of supported ODRs
  • FS: Full scale
  • FS_List: list of supported FSs
  • all: All sensor configurations.
Available commands in the command-line interface (CLI) application of FP-AI-MONITOR2

4.4. Configuring the sensors

Through the CLI, the user can configure the supported sensors for sensing and condition monitoring applications. The list of all supported sensors can be displayed in the CLI console by entering the sensor_info command. This command prints the supported sensors along with their IDs, as shown in the snippet below. The user can then configure these sensors using these IDs. The configurable options for these sensors include:

  • enable: to activate or deactivate the sensor,
  • ODR: to set the output data rate of the sensor from the list of available options, and
  • FS: to set the full-scale range from the list of available options.

The current value of any parameter of a given sensor can be printed using the command:

$ sensor_get <sensor_id> <param>

or all the information about the sensor can be printed using the command:

$ sensor_get <sensor_id> all

Similarly, the values for any of the available configurable parameters can be set through the command:

$ sensor_set <sensor_id> <param> <val>

The snippet below shows the complete example of getting and setting these values along with old and changed values.

$ sensor_info
imp34dt05  ID=0 , type=MIC
iis3dwb    ID=1 , type=ACC
ism330dhcx ID=2 , type=ACC
ism330dhcx ID=3 , type=GYRO
imp23absu  ID=5 , type=MIC
iis2iclx   ID=6 , type=ACC
stts22h    ID=7 , type=TEMP
ilps22qs   ID=8 , type=PRESS
iis2dlpc   ID=9 , type=ACC
iis2mdc    ID=10, type=MAG
-------
10 sensors supported

$ sensor_get 1 all
enable = false
nominal ODR = 26667.00 Hz, latest measured ODR = 0.00 Hz
Available ODRs:
26667.00 Hz
fullScale = 16.00 g
Available fullScales:
2.00 g
4.00 g
8.00 g
16.00 g

$ sensor_set 1 enable 1
sensor 1: enable

$ sensor_set 1 FS 4
sensor FS: 4.00

$ sensor_get 1 all
enable = true
nominal ODR = 26667.00 Hz, latest measured ODR = 0.00 Hz
Available ODRs:
26667.00 Hz
fullScale = 4.00 g
Available fullScales:
2.00 g
4.00 g
8.00 g
16.00 g
Warning white.png Warning
Make sure the sensor settings are compatible with the AI model they are attached to and with the hardware capabilities. If real-time constraints are not met, the CLI can freeze.

5. Available applications

5.1. Inertial data classification with STM32Cube.AI

The CLI application comes with a prebuilt Human Activity Recognition (HAR) model. This functionality is started by typing the command:

$ start har

Note that the provided HAR model is built with a dataset created using the ISM330DHCX_ACC sensor with ODR = 26 and FS = 4. To achieve the best performance, the user must apply this sensor configuration using the sensor_set command, as described in the command summary table. Running the $ start har command starts inference on the accelerometer data and predicts the performed activity along with its confidence. The supported activities are:

  • Stationary,
  • Walking,
  • Jogging, and
  • Biking.

The following figure is a screenshot of a normal working session of the har command in the CLI application.

FP-AI-MONITOR2 har use case.png

5.2. Anomaly detection with NanoEdgeTM AI library

FP-AI-MONITOR2 includes a pre-integrated stub, which is easily replaced by an AI condition monitoring library generated and provided by NanoEdgeTM AI Studio. This stub simulates the NanoEdgeTM AI-related functionalities, such as running learning and detection phases on the edge.

The learning phase is started by issuing the command $ start neai_learn from the CLI console or by a long press of the [USR] button. The learning progress is reported either by the slow blinking of the green LED on the STEVAL-STWINBX1 or in the CLI, as shown below:

$ NanoEdge AI: Learn
CTRL: This is a stubbed version, please install the NanoEdge AI library!
{"signal": 1, "status": "need more signals"},
{"signal": 2, "status": "need more signals"},
:
:
{"signal": 10, "status": "success"}
{"signal": 11, "status": "success"}
:
:
End of the execution phase

The CLI reports each signal as it is learned. The NanoEdgeTM AI library requires at least ten signals to be learned, so up to the ninth signal a status message saying 'need more signals' is printed along with the signal ID. Once ten signals have been learned, the status 'success' is printed. The learning can be stopped by pressing the ESC key on the keyboard or simply by pressing the [USR] button.
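The reporting sequence above can be summarized in a short Python sketch (an illustration of the stub's status sequence only, not the NanoEdgeTM AI library API; MIN_SIGNALS reflects the ten-signal minimum mentioned above):

```python
# Sketch of the learn-status reporting: the library needs at least
# ten learned signals before it reports "success".
MIN_SIGNALS = 10

def learn_status(signal_id):
    # signal_id is 1-based, as in the CLI transcript above
    return "success" if signal_id >= MIN_SIGNALS else "need more signals"

statuses = [learn_status(s) for s in range(1, 12)]
# first nine signals -> "need more signals", from the tenth on -> "success"
```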

Similarly, the user starts the condition monitoring process by issuing the command $ start neai_detect, which starts the inference phase. The anomaly detection phase checks the similarity of the presented signal with the learned normal signals. If the similarity is lower than the set threshold (default: 90%), a message is printed in the CLI reporting the occurrence of an anomaly along with the similarity value of the anomalous signal. The process is stopped by pressing the ESC key on the keyboard or pressing the [USR] button. This behavior is shown in the snippet below:

$ start neai_detect
NanoEdgeAI: starting detection phase...

$ NanoEdge AI: detection
CTRL: This is a stubbed version, please install the NanoEdge AI library!
{"signal": 1, "similarity": 0, "status": anomaly},
{"signal": 2, "similarity": 1, "status": anomaly},
{"signal": 3, "similarity": 2, "status": anomaly},
:
:
{"signal": 90, "similarity": 89, "status": anomaly},
{"signal": 91, "similarity": 90, "status": anomaly},
{"signal": 102, "similarity": 0, "status": anomaly},
{"signal": 103, "similarity": 1, "status": anomaly},
{"signal": 104, "similarity": 2, "status": anomaly},
End of the execution phase

Besides the CLI, the status is also indicated by the LEDs on the STEVAL-STWINBX1. A fast-blinking green LED shows that the detection is in progress. Whenever an anomaly is detected, the orange LED blinks twice. If not enough signals (at least ten) have been learned, a message saying "need more signals" with a similarity value equal to 0 appears.

NOTE: This behavior is simulated using a stub library: the similarity starts from 0 when the detection phase is started and increments with the signal count. Once the similarity reaches 100, it resets to 0. Note that anomalies are not reported while the similarity is between 90 and 100.
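The stubbed behavior described in this note can be reproduced with a few lines of Python (illustration only; the real library computes the similarity from the learned signals). The sketch applies the below-threshold rule stated above:

```python
# Sketch of the stubbed detection phase: the similarity starts at 0,
# increments with every signal, wraps after reaching 100, and an
# anomaly is reported whenever the similarity is below the threshold.
THRESHOLD = 90  # default threshold used by the CLI application

def stub_detection(n_signals):
    reports = []
    similarity = 0  # starts from 0 when the detection phase starts
    for sig in range(1, n_signals + 1):
        if similarity < THRESHOLD:  # below-threshold rule
            reports.append({"signal": sig, "similarity": similarity,
                            "status": "anomaly"})
        # increments with the signal count, resets to 0 after 100
        similarity = 0 if similarity >= 100 else similarity + 1
    return reports

reports = stub_detection(105)
```

Running the sketch for 105 signals produces no reports while the similarity sits between 90 and 100, which explains the gap in the reported signal IDs.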

Info white.png Information
Note : The message CTRL: This is a stubbed version, please install the NanoEdge AI library! shows that the library embedded in the function pack is just a stub and a real library is not present. This message is replaced by a new message saying CTRL: Powered by NanoEdge AI Library! once a real library is embedded.

Additional parameters in condition monitoring

For user convenience, the CLI application also provides handy options to easily fine-tune the inference and learning processes. The list of all the configurable variables is available by issuing the following command:

$ neai_get all
signals     = 0
sensitivity = 1.00
threshold   = 90%
timer       = 0 ms
sensor      = 2

Each of these parameters is configurable using the neai_set <param> <val> command.

This section provides information on how to use these parameters to control the learning and detection phases. By setting the signals and timer parameters, the user controls how many signals, or for how long, the learning and detection are performed (if both parameters are set, the learning or detection phase stops as soon as the first condition is met). For example, to learn 10 signals, the user issues the following command before starting the learning phase, as shown below.

$ neai_set signals 10
signals set to 10

$ start neai_learn
NanoEdgeAI: starting learn phase...

$ NanoEdge AI: Learn
CTRL: This is a stubbed version, please install the NanoEdge AI library!
{"signal": 1, "status": "need more signals"},
{"signal": 2, "status": "need more signals"},
...
{"signal": 9, "status": "need more signals"},
{"signal": 10, "status": "success"},
End of the execution phase

If both of these parameters are set to "0" (default value), the learning and detection phases run indefinitely.
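The stop condition described above can be sketched as follows (illustrative Python; the function name is invented for this sketch):

```python
def should_stop(signal_count, elapsed_ms, signals_limit, timer_ms):
    # A phase stops when either the signal budget or the time budget
    # is exhausted, whichever comes first; a value of 0 disables that limit.
    if signals_limit > 0 and signal_count >= signals_limit:
        return True
    if timer_ms > 0 and elapsed_ms >= timer_ms:
        return True
    return False  # both limits 0 (default) -> run indefinitely

# e.g. after "neai_set signals 10" with timer left at 0:
assert should_stop(10, 1234, 10, 0)
assert not should_stop(9, 1234, 10, 0)
```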

The threshold parameter controls the reporting of anomalies: any signal whose similarity is below the threshold value is reported as an anomaly. The default threshold value used in the CLI application is 90. The user can change this value using the neai_set threshold <val> command.

Finally, the sensitivity parameter acts as an emphasis parameter. The default value is 1. Increasing the sensitivity makes the signal matching stricter, while reducing it relaxes the similarity calculation, resulting in higher similarity values.

Info white.png Information
Note: For the best performance, the user must expose all the normal conditions to the sensor board during the learning and library generation process. For example, in the case of motor monitoring, all the speeds and ramps that need to be monitored must be exposed.

For further details on how the NanoEdgeTM AI libraries work, users are invited to read the detailed documentation of NanoEdgeTM AI Studio.

5.3. n-class classification with NanoEdgeTM AI

This section provides an overview of the classification application provided in FP-AI-MONITOR2, based on the NanoEdgeTM AI classification library. FP-AI-MONITOR2 includes a pre-integrated stub which is easily replaced by an AI classification library generated using NanoEdgeTM AI Studio. This stub simulates the NanoEdgeTM AI classification functionality by simply alternating between two classes every ten signals.

Unlike the anomaly detection library, the classification library from the NanoEdgeTM AI Studio comes with static knowledge of the data and does not require any learning on the device. Based on the provided sample data, this library contains the functions that best discriminate one class from another and assign the correct label when performing the detection on the edge. The classification application powered by NanoEdgeTM AI is started by issuing the command $ start neai_class, as shown in the snippet below.

 
$ start neai_class
NanoEdgeAI: starting classification phase...

$ CTRL: This is a stubbed version, please install the NanoEdge AI library!
NanoEdge AI: classification
{"signal": 1, "class": Class1}
{"signal": 2, "class": Class1}
:
:
{"signal": 10, "class": Class1}
{"signal": 11, "class": Class2}
{"signal": 12, "class": Class2}
:
:
{"signal": 20, "class": Class2}
{"signal": 21, "class": Class1}
:
:
End of the execution phase

The CLI shows that for the first ten samples the class is detected as "Class1", while for the next ten samples "Class2" is detected as the current class. The classification phase can be stopped by pressing the ESC key on the keyboard or simply by pressing the [USR] button.


NOTE: This behavior is simulated using a stub library, which iterates through the classes by displaying ten consecutive labels for one class, then ten labels for the next class, and so on.

Info white.png Information
Note: The message CTRL: This is a stubbed version, please install the NanoEdge AI library! shows that the library embedded in the function pack is just a stub and a real library is not present. Once a real library is embedded, this message is replaced by another message saying CTRL: Powered by NanoEdge AI Library!

5.4. Dual-mode application with STM32Cube.AI and NanoEdgeTM AI

In addition to the three applications described in the sections above, FP-AI-MONITOR2 also provides an advanced execution phase called the dual application mode. This mode combines anomaly detection based on the NanoEdgeTM AI library with classification performed by a prebuilt ANN model fed by the analog microphone. The dual mode works in a power-saving configuration: a low-power anomaly detection algorithm based on the NanoEdgeTM AI library runs continuously on the vibration data, and the ANN classification based on the high-frequency analog microphone pipeline is triggered only when an anomaly is detected. Other than this, the two applications are independent of each other. It is also worth mentioning that the dual mode was created for a USB fan running at maximum speed, and does not work very well when tested at other speeds.

To start testing the dual application execution phase, the user first needs to train the anomaly detection library using the $ start neai_learn command at the highest speed of the fan. Once the normal conditions have been learned, the user can start the dual application by issuing the $ start dual command, as shown in the snippet below:

FP-AI-MONITOR2 dual use case.png

Whenever an anomaly is detected, that is, a signal with a similarity lower than 90%, the ultrasound-based classifier is started. Both applications run in asynchronous mode. The ultrasound-based classification model takes almost one second of data, preprocesses it into Mel-frequency cepstral coefficients (MFCC), and feeds them to a pre-trained neural network, which prints the predicted class label along with its confidence. The network is trained on four classes ['Off', 'Normal', 'Clogging', 'Friction']: 'Off' when the fan is not running, 'Normal' when it runs at maximum speed, 'Clogging' when it is clogged and running at maximum speed, and 'Friction' when friction is applied to the rotating axis. As soon as the anomaly detection detects normal conditions again, the ultrasound-based ANN is suspended.
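The control flow of the dual mode can be summarized with a Python sketch (the function names and the stubbed classifier are invented for illustration; on the device, the two pipelines run asynchronously):

```python
# Simplified control flow of the dual mode: the low-power NanoEdge AI
# anomaly detector runs continuously on the vibration data; the
# microphone ANN classifier runs only while an anomaly persists, and
# is suspended again once the detector sees normal conditions.
THRESHOLD = 90

def dual_mode(similarities, classify):
    """similarities: per-signal similarity values from the detector.
    classify: callable returning (label, confidence) from ~1 s of audio."""
    events = []
    classifier_running = False
    for sim in similarities:
        anomaly = sim < THRESHOLD          # below-threshold rule
        if anomaly and not classifier_running:
            classifier_running = True      # wake the audio pipeline
        elif not anomaly and classifier_running:
            classifier_running = False     # suspend the audio pipeline
        if classifier_running:
            events.append(classify())
    return events

# Stub classifier standing in for the MFCC + ANN pipeline:
labels = iter([("Clogging", 0.97), ("Friction", 0.88)])
events = dual_mode([95, 60, 70, 95], lambda: next(labels))
# classifier fired only for the two low-similarity signals
```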

6. Generating the AI models

6.1. Anomaly detection with NanoEdgeTM AI

The following section shows how to generate an anomaly detection library using NanoEdgeTM AI Studio, and how to install and test it on the STEVAL-STWINBX1 using FP-AI-MONITOR2. The steps are briefly described in the following figure.

FP-AI-MONITOR1 neai lib generation flow.png

6.1.1. Data logging for normal and abnormal conditions

The details on how to acquire the data are provided in the section data logging using HSDatalog.

Info white.png Information
Note: For the best performance, the user must expose the solution to the normal and abnormal behaviors that are expected in real use (a few signals only). For detailed information, users are invited to read the detailed documentation of the NanoEdgeTM AI Studio here.

6.1.2. Data preparation for library generation

The data logged through the datalogger is in binary format; as such, it is neither user-readable nor compliant with the NanoEdgeTM AI Studio input format.

This data can be converted using the PythonTM utility scripts provided at /FP-AI-MONITOR2_V1.0.0/Utilities/AI_Resources/NanoEdgeAi/:

  • The Jupyter notebook NanoEdgeAI_Utilities_AD.ipynb provides a complete example of data preparation for generating a library for a three-speed fan running in normal and clogged conditions.
  • In addition, the HSD_2_NEAISegments.py script is provided for users who want to prepare segments for a given data acquisition. For a data acquisition made with the ism330dhcx_acc sensor, issuing the command >> python HSD_2_NEAISegments.py ../Datasets/HSD_Logged_Data/Fan12CM/ism330dhcx/normal/1000RPM/ generates a file named ism330dhcx_acc_NanoEdge_segments_0.csv using the default parameter set: segments of length 1024 with a stride of 1024, with the first 512 samples of the file skipped to avoid unstable samples at the start of the acquisition. Help for this script is available by typing python HSD_2_NEAISegments.py -h.
  • If multiple acquisitions are used for the normal and abnormal conditions, once the *_NanoEdge_segments_0.csv files have been generated for the different acquisitions, all the files for the normal condition can be combined into a single file by copy-paste, and all the files for the abnormal condition into another file. This way, the user provides only one normal file and one abnormal file to the Studio.
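With the default parameters, the segmentation performed by HSD_2_NEAISegments.py amounts to the following windowing, sketched here in plain Python (a simplified re-implementation for illustration, not the script itself):

```python
# Sketch of the default segmentation: skip the first 512 samples
# (unstable start of the acquisition), then cut non-overlapping
# windows of 1024 samples (stride equal to the window length).
def make_segments(samples, window=1024, stride=1024, skip=512):
    usable = samples[skip:]
    segments = []
    start = 0
    while start + window <= len(usable):
        segments.append(usable[start:start + window])
        start += stride
    return segments

# 3000 single-axis samples -> (3000 - 512) // 1024 = 2 full segments
data = list(range(3000))
segs = make_segments(data)
assert len(segs) == 2
assert segs[0][0] == 512  # first segment starts right after the skip
```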

6.1.3. Library generation using NanoEdgeTM AI Studio

Running the scripts in FP-AI-MONITOR2_V1.0.0/Utilities/AI_Resources/NanoEdgeAi/NanoEdgeAI_Utilities_AD.ipynb generates the normal_WL1024_segments.csv and clogged_WL1024_segments.csv files in FP-AI-MONITOR2_V1.0.0/Utilities/AI_Resources/Datasets/HSD_Logged_Data/Fan12CM/ism330dhcx/. This section describes how to generate the anomaly detection libraries using these normal and abnormal (clogged condition) files.

The NanoEdgeTM AI Studio can generate four types of libraries, namely:

  • Anomaly Detection
  • 1-class classification
  • n-class classification
  • Extrapolation.

To generate the anomaly detection library, the first step is to create the project by clicking on the AD button, as shown in the figure below. The displayed text explains what anomaly detection is good for, and the project is created by clicking on the CREATE NEW PROJECT button.

FP-AI-MONITOR2 NanoEdge DA .png

The process to generate the anomaly detection library consists of seven steps as described in the figure below:

FP-AI-MONITOR2 NanoEdge project settings.png
  1. Project Settings
    • Choose a project name and description.
    • Hardware Configurations
      • Choosing the target STM32 microcontroller type: select STEVAL-STWINBX1 from the list provided in the drop-down menu under Target.
      • Maximum amount of RAM to be allocated for the library: usually, a few Kbytes is enough (it depends on the data frame length used in the process of data preparation; 32 Kbytes is a good starting point).
      • Set a limit, or no limit, for the maximum flash budget.
      • Sensor type: select 3-axis accelerometer from the list in the drop-down under Sensor type.
    • Click on SAVE & NEXT
  2. Provide the sample contextual data for the normal segments, used to adjust and measure the performance of the chosen model. Choose the FROM FILE option and provide the path to the normal_WL1024_segments.csv file generated with the Jupyter notebook,
  3. Provide the sample contextual data for the abnormal segments, used to adjust and measure the performance of the chosen model. Choose the FROM FILE option and provide the path to the clogged_WL1024_segments.csv file generated with the Jupyter notebook,
  4. Benchmark the available models and choose the one that complies with the requirements and provides the best performance. At the end of the process, the best library is selected for the setup under experimentation. At this stage, an indication of the minimum number of signals to learn is also provided. To get stable results, the user must run neai_learn for at least this many signals.
  5. The emulator can optionally be used to test the selected NanoEdgeTM AI library directly inside the Studio and to verify its performance on some test data.
  6. The validation step provides the details of the model (the components under the hood), its accuracy, footprint, and other details that help the user judge whether the model is a good fit for the project.
  7. The final step is to compile and download the libraries. In this process, the "Float abi" flag must be checked to use the libraries with the hardware FPU. All the other flags can be left in their default state.
Info white.png Information
To use the library with the Keil toolchain, also check the fshort-wchar flag in addition to Float abi.

Detailed documentation on the NanoEdgeTM AI Studio.

6.2. n-class classification with NanoEdgeTM AI

The following describes how to generate the NanoEdgeTM AI-based classification library and download it. The figure below shows a full flow of generating the library, integrating and installing it with the firmware, and testing it on the sensor tile to classify different conditions.

FP-AI-MONITOR1 neai class lib generation flow.png

6.2.1. Data logging for different conditions

The details on how to log the data are provided in the section Datalogging using HSDatalog. Using these steps, the user needs a separate acquisition for each of the conditions/classes to be monitored. For simplicity, FP-AI-MONITOR2 provides sample data for a USB fan in four different conditions, namely [Off, Normal, Clogging, Friction]. This data is measured using the ism330dhcx and imp23absu sensors and can be found in the FP-AI-MONITOR2_V1.0.0/Utilities/AI_Resources/Datasets/HSD_Logged_Data/Fan12CM/ism330dhcx+imp23absu directory. For the configuration, the acc subsensor of ism330dhcx runs at ODR = 1666 samples/sec and FS = 4G, and imp23absu runs at ODR = 192,000 samples/sec.

6.2.2. Data preparation for n-class classification with NanoEdgeTM AI Studio

The data logged through the datalogger is in binary format; as such, it is neither user-readable nor compliant with the NanoEdgeTM AI Studio input format.

To convert this data into a usable form, PythonTM utilities are provided to prepare the data for generating the n-class classification libraries with NanoEdgeTM AI Studio. These scripts are provided in the form of a Jupyter notebook, which can be found at /FP-AI-MONITOR2_V1.0.0/Utilities/AI_Resources/NanoEdgeAi/NanoEdgeAI_Utilities_NCC.ipynb.

Running all the sections of this notebook generates the NanoEdgeTM AI segments for all the acquisitions in the data log folders, which can then be fed to the NanoEdgeTM AI Studio in the next step.

6.2.3. Library generation using NanoEdgeTM AI Studio

Running the scripts in the Jupyter notebook generates an ism330dhcx_NanoEdge_segments_0.csv file in each of the four data log folders. This section describes how to use these files to generate the n-class classification library with NanoEdgeTM AI Studio.

To generate the classification library, the first step is to create the project by clicking on the nC button. This modifies the display to show what n-class classification libraries are good for and how they work. Create a new project by clicking on the CREATE NEW PROJECT button.

FP-AI-MONITOR2-nclass-sel.png

The process to generate the classification library consists of six steps, as described in the figure below:

FP-AI-MONITOR2-nclass.png
  1. Project Settings
    • Choose a project name and description.
    • Hardware Configurations
      • Choosing the target STM32 microcontroller type: select STEVAL-STWINBX1 from the list provided in the drop-down menu under Target.
      • Maximum amount of RAM to be allocated for the library: usually, a few Kbytes is enough (it depends on the data frame length used in the process of data preparation; 32 Kbytes is a good starting point).
      • Set a limit, or no limit, for the maximum flash budget.
      • Sensor type: select 3-axis accelerometer from the list in the drop-down under Sensor type.
    • Click on SAVE & NEXT
  2. Provide the sample data for all four classes. Choose the FROM FILE option and provide the path to the ism330dhcx_acc_NanoEdge_segments_0.csv file generated with the Jupyter notebook.
    NOTE: Assign a class label to the imported data in the Name field once the data is imported. This label is used to differentiate the classes and is used in the emulation to show the predicted class.
    • Repeat the process for all four classes.
  3. Benchmark the available models and choose the one that complies with the requirements and provides the best performance.
  4. The emulator can optionally be used to test the selected NanoEdgeTM AI library directly inside the Studio.
  5. The validation step provides the details of the model (the components under the hood), its accuracy, footprint, and other details that help the user judge whether the model is a good fit for the project.
  6. The final step is to compile and download the libraries. Since FP-AI-MONITOR2 supports multiple NanoEdgeTM AI libraries, this step requires a few specific settings to work with FP-AI-MONITOR2:
    • Check the Multi-library checkbox and provide ncc in the suffix name field. (ncc = n-class classification)
    • Check the "Float abi" flag to use the libraries with the hardware FPU. All the other flags can be left in their default state.
Info white.png Information
To use the library with the Keil toolchain, also check the fshort-wchar flag in addition to Float abi.

Detailed documentation on the NanoEdgeTM AI Studio.

6.3. X-CUBE-AI models

This section explains how to generate the code for pre-trained AI models using STM32 X-CUBE-AI. Since FP-AI-MONITOR2 runs two pre-trained AI models, namely USC and HAR, the code is generated for both models at the same time for simplification and optimization purposes. The code can be generated either for the combination of har_IGN.h5 and usc_4_class.tflite, or for har_svc.onnx and usc_4_class.tflite. These pre-trained models are available in the FP-AI-MONITOR2_V1.0.0/Utilities/AI_Resources/Models directory. The package already contains the code for the HAR IGN + USC combination and the HAR SVC + USC combination. The section below provides a step-by-step process for code generation for the HAR IGN and USC combination.

Step 1: Open the STM32CubeMX and click on the ACCESS TO BOARD SELECTOR button as shown in the image below.

FP AI MONITOR2 step 1 start new project.png

Step 2: Create a new project for STEVAL-STWINBX1:

  1. Confirm the board selector tab is selected,
  2. Search for the right board, in our case the STEVAL-STWINBX1, by typing its name in the search bar,
  3. Select the board by clicking on either the board image or the name of the board,
  4. Start Project, and
  5. Choose NO for the peripheral initialization option, as it is not required for our purpose.
FP AI MONITOR2 step 2 1 board select.png

and check the "without TrustZone activated" option

FP AI MONITOR2 step 2 2 trust zone.png

Step 3: Configuring hardware

Step 3.1: UART configuration

  1. Click on the connectivity option in the left menu
  2. Select USART2
  3. Enable the "Asynchronous mode" by selecting it in the drop-down menu.
  4. Check that the PD5 and PD6 pins are selected
FP AI MONITOR2 step 3 1 setup usart.png

Step 3.2: Power and Thermal configuration

  1. Click on the Power and Thermal option in the left menu
  2. Select PWR
  3. Enable the "SMPS" by selecting it in the drop-down menu.
FP AI MONITOR2 step 3 2 setup smps.png

Step 3.3: Memory Cache configuration

  1. Click on the System Core option in the left menu
  2. Select ICACHE
  3. Enable the "2-ways set associative cache" by selecting it in the drop-down menu.
FP AI MONITOR2 step 3 3 setup cache.png

Step 4: Add the X-CUBE-AI in the software packs for the project.

  1. Click on Software Packs and select the Select Components option,
  2. Click on the STMicroelectronics X-CUBE-AI pack. Version 8.0.0 or later can be used; version 8.0.0 is preferred,
  3. Click on the checkbox in front of Core to add the X-CUBE-AI code to the software stack,
  4. Expand the Device Application menu and, in the drop-down menu under the Selection column, select SystemPerformance,
  5. Click the OK button to finish adding X-CUBE-AI to the project.
FP AI MONITOR2 step 4 select comp.png

Step 5: Platform Settings

Step 5.1: Clocking

  1. Click on Software Packs,
  2. Choose STMicroelectronics.X-CUBE-AI,
  3. Confirm the clock and peripheral settings for best performance
FP AI MONITOR2 step 5 1 platform settings.png

Step 5.2: COM port

  1. Select the Platform Settings tab,
  2. Select USART: Asynchronous option for the COM Port,
  3. Select USART2 in the "Found Solutions" drop-down.
FP AI MONITOR2 step 5 2 platform settings.png

Step 6: Adding the networks for code generation.

This step requires the user to add two networks, HAR and USC. This is done using the following steps:

  • Adding HAR model
    1. Click on the add network button,
    2. In the first text field, enter har_network as the network name. This name differentiates the namespace of the model when code for multiple models is generated at once.
    3. From the drop-down, select the type of network. X-CUBE-AI can generate code for Keras, ONNX, and TFLite models. For the har_IGN.h5 file, select the Keras option.
    4. Click on the Browse button to select the model.
    5. The har_IGN.h5 is located under FP-AI-MONITOR2_V1.0.0/Utilities/AI_Resources/Models.
    6. Select the model and click Select.
FP AI MONITOR2 step 6 1 har.png
  • Adding USC model
    1. Click on the add network button,
    2. In the first text field, enter usc_network as the network name. This name differentiates the namespace of the model when code for multiple models is generated at once.
    3. From the drop-down, select TFLite as the type of network.
    4. Click on the Browse button to select the model.
    5. The usc_4_class.tflite is located under FP-AI-MONITOR2_V1.0.0/Utilities/AI_Resources/Models.
    6. Select the model and click OK.
FP AI MONITOR2 step 6 2 USC.png

Step 7: Generating the C-code

  1. Go to Project Manager
  2. Assign a "Project Name", in the figure below it is AI-Code-Gen
  3. Choose a Project Location, in the figure below it is C:/Temp/
  4. Under Toolchain/IDE option select STM32CubeIDE (or your required toolchain), and
  5. Click on the GENERATE CODE button. A progress bar starts and shows progress across the different code generation steps.
  6. Upon completion of the code generation process, a dialog appears as shown in the figure below. Click on the Open Folder button to open the project with the generated code.
  7. The files for the network models that have to be replaced in the provided project are located in the C:/Temp/AI-Code-Gen/X-CUBE-AI/App directory, as shown in the figure below.
FP AI MONITOR2 step 7 generate.png

The next section shows how these files are used to replace the existing model embedded in the FP-AI-MONITOR2 software step-by-step and install it on the sensor board.

7. Updating the AI models

7.1. Anomaly detection with NanoEdgeTM AI

Once the libraries are generated and downloaded from NanoEdgeTM AI Studio, the next step is to link them to FP-AI-MONITOR2 and run them on the STWIN.box. FP-AI-MONITOR2 comes with library stubs in place of the actual libraries generated by NanoEdgeTM AI Studio, which simplifies the linking of the generated libraries. To link the actual libraries, the user copies the generated files and replaces the existing stub/dummy header file NanoEdgeAI.h and library file libneai.a in the Inc and Lib folders, respectively, under /FP-AI-MONITOR2_V1.0.0/Middlewares/ST/NanoEdge_AI_Library/.

Warning white.png Warning
If the project is built using the Keil IDE, the file libneai.a must be renamed to libneai.lib before copying.
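The copy step above can be scripted from a PC. The sketch below is illustrative only (the function name and folder arguments are assumptions, not part of the pack); it copies the Studio output over the stubs and applies the Keil rename when requested:

```python
import shutil
from pathlib import Path

def copy_neai_library(studio_out: Path, pack_root: Path, keil: bool = False) -> list[str]:
    """Copy the NanoEdge AI Studio output over the stub files in the pack.

    studio_out is assumed to contain NanoEdgeAI.h and libneai.a as produced
    by the Studio; pack_root is the FP-AI-MONITOR2_V1.0.0 folder.
    """
    dest = pack_root / "Middlewares" / "ST" / "NanoEdge_AI_Library"
    # Header goes to Inc, library to Lib; Keil builds expect a .lib extension.
    targets = {
        "NanoEdgeAI.h": dest / "Inc" / "NanoEdgeAI.h",
        "libneai.a": dest / "Lib" / ("libneai.lib" if keil else "libneai.a"),
    }
    copied = []
    for name, target in targets.items():
        target.parent.mkdir(parents=True, exist_ok=True)
        shutil.copyfile(studio_out / name, target)
        copied.append(str(target.relative_to(pack_root)))
    return copied
```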

Once these files are copied, the project must be rebuilt and programmed on the sensor board to link the libraries correctly. For this, the user must go to the /FP-AI-MONITOR2_V1.0.0/Projects/STWIN.box/Applications/FP-AI-MONITOR2/STM32CubeIDE/ folder and double-click the .project file to open the project in STM32CubeIDE.

To build and install the project, click the play button and wait for the successful download message, as shown in the section Build and Install Project.

Once the sensor board is successfully programmed, the welcome message appears in the CLI (Tera Term terminal). If the message does not appear, try to reset the board by pressing the RESET button.

Info white.png Information
NOTE: To be absolutely sure that the new software is installed in the STEVAL-STWINBX1, the user can issue the command info. This shows the compile time, so the user can confirm whether the sensor board is programmed with the new or the old binary.

7.1.1. Testing the anomaly detection with NanoEdgeTM AI

Once the STWIN.box is programmed with the firmware containing a valid library, the condition-monitoring library is ready to be tested on the sensor board. The learning and detection commands can now be issued without the warning about the stub presence.

To achieve the best performance, the user must perform the learning with the same sensor configuration that was used during the contextual data acquisition. For example, the snippet below shows the commands to configure the ism330dhcx sensor (sensor_id 2) with the following parameters:

  • enable = 1
  • ODR = 1666,
  • FS = 4.
$ sensor_set 2 enable 1
sensor 2: enable

$ sensor_set 2 ODR 1666
nominal ODR = 1666.00 Hz, latest measured ODR = 0.00 Hz

$ sensor_set 2 FS 4
sensor FS: 4.00

Also, to get stable results in the neai_detect phase, the user must perform the learning for all the normal conditions before starting the detection. If the detection phase is run without performing the learning first, erratic results are displayed. An indication of the minimum number of signals to be used for learning is provided when a library is generated in the NanoEdgeTM AI Studio benchmarking step.

Info white.png Information
Note: The NanoEdgeTM AI library comes with an incremental learning capability, so the learning and detection phases can be toggled if the user does not get good results or if new normal states are to be added. Incremental learning means that new normal states are learned without forgetting the previous learning.

7.2. n-class classification with NanoEdgeTM AI

FP-AI-MONITOR2 comes with library stubs in place of the actual libraries generated by NanoEdgeTM AI Studio; this simplifies linking the generated libraries. For the n-class classification library, the learned class features are also required, in the form of a knowledge.h file in addition to the .h and .a files. So, unlike the anomaly detection library, to link the classification library the user needs to copy two header files, NanoEdgeAI_ncc.h and knowledge_ncc.h, and the library file libneai_ncc.a, replacing the files already present in the Inc and Lib folders, respectively. The relative path of these folders is /FP-AI-MONITOR2_V1.0.0/Middlewares/ST/NanoEdge_AI_Library/.

Warning white.png Warning
If the project is built using the Keil IDE, the file libneai_ncc.a must be renamed to libneai_ncc.lib before copying.

In addition, the user is required to add the labels of the classes expected during the classification process by updating the sNccClassLabels array in the FP-AI-MONITOR2_prj/Application/Src/AppController.c file:

This snippet is provided AS IS, and by taking it, you agree to be bound to the license terms that can be found here for the component: Application.


/**
 * Specifies the labels for the classes of the NEAI classification demo.
 */
static const char* sNccClassLabels[] = {
  "Unknown",
  "Off",
  "Normal",
  "Clogging",
  "Friction"
};

Note that the first label is "Unknown" and must be left as it is. The real labels for the classification come after it. The order of the labels [Off, Normal, Clogging, Friction] has to be the same as the order in which the data was provided during library generation in NanoEdgeTM AI Studio.

Info white.png Information
The order of the labels for the n-class classification library can be found in the NanoEdgeAI_ncc.h file. The users can simply copy and paste this array.
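Since the label order must match the header, one way to avoid transcription mistakes is to extract the array programmatically. The sketch below is an assumption-laden helper (not part of the pack): it assumes the header declares the labels as a C array of string literals, whose exact name may vary between Studio versions:

```python
import re

def labels_from_header(header_text: str) -> list[str]:
    """Extract class labels from a NanoEdgeAI_ncc.h-style header.

    Assumes the labels appear as a brace-initialized array of string
    literals, e.g.:  const char *id2class[4] = {"Off", "Normal", ...};
    Returns an empty list if no such array is found.
    """
    # Find the first brace-initialized array on the right of an '='.
    m = re.search(r"=\s*\{([^}]*)\}", header_text)
    if not m:
        return []
    # Pull out every double-quoted string literal inside the braces.
    return re.findall(r'"([^"]*)"', m.group(1))
```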

After this step, the project must be rebuilt and programmed on the sensor board to link the libraries correctly. To build and install the project click on the play button and wait for the successful download message as shown in the section Build and Install Project.

Once the sensor board is successfully programmed, the welcome message appears in the CLI (Tera Term terminal). If the message does not appear, try to reset the board by pressing the RESET button.

7.2.1. Testing the n-class classification with NanoEdgeTM AI

Once the STWIN.box is programmed with the FW containing a valid n-class classification library, it is ready to be tested.

To achieve the best performance, the user must perform the data logging and the testing with the same sensor configuration. For example, the snippet below shows the commands to configure the ism330dhcx sensor (sensor_id 2) with the following parameters:

  • enable = 1
  • ODR = 1666,
  • FS = 4.
$ sensor_set 2 enable 1
sensor 2: enable

$ sensor_set 2 ODR 1666
nominal ODR = 1666.00 Hz, latest measured ODR = 0.00 Hz

$ sensor_set 2 FS 4
sensor FS: 4.00

After the sensor configuration is completed, the user can start the NEAI classification by issuing the command $ start neai_class.

7.3. X-CUBE-AI models

The CLI example has two prebuilt models integrated with it: one for HAR and another for USC. Following the steps below, the user can update these models. As described in the code generation process, for the sake of simplicity, the code for both models is generated at once. In this example, the provided code for the combination of the SVC and USC models (USC_4_Class_+_SVC_4_Class) is replaced with the prebuilt and converted C code from the HAR (CNN) and USC (CNN) models provided in the /FP-AI-MONITOR2_V1.0.0/Utilities/AI_Resources/Models/USC_4_Class_+_IGN_4_Class/ folder.

To update the model, the user must copy and replace the following files in the /FP-AI-MONITOR2_V1.0.0/Projects/STWIN.box/Applications/FP-AI-MONITOR2/X-CUBE-AI/App/ folder:

  • app_x-cube-ai.c,
  • app_x-cube-ai.h,
  • har_network.c,
  • har_network.h,
  • har_network_config.h,
  • har_network_data.h,
  • har_network_data.c,
  • har_network_data_params.c,
  • har_network_data_params.h,
  • har_network_generate_report.txt,
  • usc_network.c,
  • usc_network.h,
  • usc_network_config.h,
  • usc_network_data.h,
  • usc_network_data.c,
  • usc_network_data_params.c,
  • usc_network_data_params.h, and
  • usc_network_generate_report.txt.
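The file replacement above can be scripted. The sketch below is illustrative (the function name is an assumption, and the glob patterns are an assumption derived from the file names listed above; adjust them if a CubeMX version emits differently named files):

```python
import shutil
from pathlib import Path

def replace_xcubeai_files(generated_app: Path, pack_app: Path) -> list[str]:
    """Copy the CubeMX-generated X-CUBE-AI application files over the
    ones shipped in the function pack, returning the replaced file names.

    generated_app is assumed to be <project>/X-CUBE-AI/App from the code
    generation step; pack_app is the pack's .../X-CUBE-AI/App folder.
    """
    # Patterns matching the file list above (assumed, not exhaustive).
    patterns = ("app_x-cube-ai.c", "app_x-cube-ai.h",
                "*_network*.c", "*_network*.h", "*_network*_report.txt")
    replaced = set()
    pack_app.mkdir(parents=True, exist_ok=True)
    for pattern in patterns:
        for src in generated_app.glob(pattern):
            shutil.copyfile(src, pack_app / src.name)
            replaced.add(src.name)
    return sorted(replaced)
```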

After the files are copied, the user must open the project with STM32CubeIDE. To do so, go to the /FP-AI-MONITOR2_V1.0.0/Projects/STWIN.box/Applications/FP-AI-MONITOR2/STM32CubeIDE/ folder and double-click the .project file. Once the project is opened, open the FP-AI-MONITOR2_V1.0.0/Projects/STWIN.box/Application/FP-AI-MONITOR2/X-CUBE-AI/App/app_x-cube-ai.c file and comment the following lines of code as shown below:

This snippet is provided AS IS, and by taking it, you agree to be bound to the license terms that can be found here for the component: Application.


#include "app_x-cube-ai.h"
//#include "bsp_ai.h"
//#include "aiSystemPerformance.h"
#include "ai_datatypes_defines.h"

/* USER CODE BEGIN includes */
/* USER CODE END includes */

/* IO buffers ----------------------------------------------------------------*/

//DEF_DATA_IN
//
//DEF_DATA_OUT
/* Activations buffers -------------------------------------------------------*/

AI_ALIGNED (32)
static uint8_t pool0[AI_USC_NETWORK_DATA_ACTIVATION_1_SIZE];

ai_handle data_activations0[] = {pool0};
ai_handle data_activations1[] = {pool0};

/* Entry points --------------------------------------------------------------*/

//void MX_X_CUBE_AI_Init (void)
//{
//    MX_UARTx_Init ();
//    aiSystemPerformanceInit ();
//    /* USER CODE BEGIN 5 */
//    /* USER CODE END 5 */
//}
//
//void MX_X_CUBE_AI_Process(void)
//{
//    aiSystemPerformanceProcess ();
//    HAL_Delay (1000); /* delay 1s */
//    /* USER CODE BEGIN 6 */
//    /* USER CODE END 6 */
//}

7.3.1. Building and installing the project

Then, build and install the project on the STWIN.box sensor board by pressing the play button as shown in the figure below.

FP AI MONITOR2 build install.png

A message saying Download verified successfully indicates that the new firmware is programmed in the sensor board.

Info white.png Information
NOTE: To be absolutely sure that the new software is installed in the STEVAL-STWINBX1, the user can issue the command info. This shows the compile time, so the user can confirm whether the sensor board is programmed with the new or the old binary.

8. FP-AI-MONITOR2 Utilities

For ease of use, FP-AI-MONITOR2 comes with a set of utilities to record and prepare the datasets and to generate the AI models for the different supported AI applications. These utilities are in the FP-AI-MONITOR2_V1.0.0/Utilities/ directory, which contains two subdirectories:

* AI_Resources (all the Python™ resources related to AI, and a requirements.txt listing all the Python™ dependencies), and
* DataLog (binary and helper scripts for the High-Speed Datalogger).

The following section briefly describes the contents of the AI_Resources folder.

8.1. AI_Resources

FP-AI-MONITOR2 comes equipped with four AI applications. AI_Resources contains the Python™ scripts, the sample datasets, and the utility code to prepare the AI models supported in this function pack.

The AI_Resources directory has the following subdirectories:

  • Dataset: contains the different datasets/placeholders used in the function pack:
    • AST, a small subsample from an ST proprietary dataset for HAR,
    • WISDM_ar_v1.1, a placeholder for downloading and placing the WISDM dataset for HAR,
    • HSD_Logged_Data, different datasets logged using the High-Speed Datalogger binary:
      • Fan12CM, datasets logged on a 12 cm USB fan using different onboard sensors of the STEVAL-STWINBX1,
      • HAR, a human activity recognition dataset acquired using the STEVAL-STWINBX1.
  • Models: contains the pregenerated and trained models for HAR and USC along with their C code:
    • USC_4_Class_+_IGN_4_Class contains a combination of a CNN for HAR and a CNN for USC,
    • USC_4_Class_+_SVC_4_Class contains a combination of an SVC for HAR and a CNN for USC.
  • NanoEdgeAi: contains the helper scripts to prepare the data for NanoEdgeTM AI Studio to generate libraries for:
    • anomaly detection,
    • anomaly classification,
    • HAR.
  • SignalProcessingLib: contains the code for various signal processing modules (equivalent embedded code is available for all the modules in the function pack).
  • TrainingScripts: contains the Python™ scripts in two subdirectories:
    • HAR contains the helping scripts along with two Jupyter notebooks:
      • HAR_with_CNN.ipynb (a complete step-by-step code to build a sample HAR model based on convolutional neural networks), and
      • HAR_with_SVC.ipynb (a complete step-by-step code to build a sample HAR model based on a support vector machine classifier).
    • UltrasoundClassification contains helping scripts and a Jupyter notebook for preparing a CNN-based model for condition classification using the ultrasound data logged with imp23absu:
      • UltrasoundClassification.ipynb (a complete step-by-step code to build a sample USC model based on convolutional neural networks).
Info white.png Information
NOTE: The examples provided here are for the USB fan and HAR models, but, using this pathway, readers can build any AI application or use case of their choosing. Also, the preprocessing chain can be changed to match the user's needs and requirements, as a full set of preprocessing modules is available along with an equivalent C-code implementation.

9. Data collection

The data collection functionality is out of the scope of this function pack. However, to simplify this for the users, this section provides a step-by-step guide to logging data on the STEVAL-STWINBX1 using the high-speed data logger.

Step 1: Program the STEVAL-STWINBX1 with the HSDatalog FW

The STEVAL-STWINBX1 can be programmed with the HSDatalog firmware through the simple drag-and-drop action shown here, using the binary file Utilities/Datalog/DATALOG2_Release.bin.

Step 2: Place a device_config.json file on the SD card.

The next step is to copy a device_config.json file onto the SD card. This file contains the sensor configurations to be used for the data logging. The user can simply use one of the sample .json files provided in the /Utilities/Datalog/STWIN.box_config_examples/ directory of the package.

Note: The configuration file must be named exactly device_config.json, otherwise the process does not work.

Step 3: Insert the SD card into the STWIN.box board.

Insert an SD card in the STEVAL-STWINBX1.

Note: For data logging using the high-speed data logger, the user needs a FAT32-formatted microSD™ card.

Step 4: Reset the board.

Reset the board. The orange LED blinks once per second. The custom sensor configurations provided in device_config.json are loaded from the file.

Step 5: Start the data log.

Press the [USR] button to start data acquisition on the SD card. The orange LED turns off and the green LED starts blinking to signal that sensor data is being written to the SD card.

Step 6: Stop the data logging.

Press the [USR] button again to stop data acquisition. Do not unplug the SD card, turn the board off, or press [RST] before stopping the acquisition, otherwise the data on the SD card are corrupted.

Step 7: Retrieve data from the SD card.

Remove the SD card and insert it into an appropriate SD card slot on the PC. The log files are stored in a STWINBOX_##### folder for every acquisition, where ##### is a sequential number determined by the application to ensure that log file names are unique. Each folder contains:

  • a sensorName_subSensorName.dat file for each active subsensor, containing the raw sensor data coupled with timestamps,
  • a device_config.json with specific information about the device configuration, necessary for correct data interpretation (confirm that the sensor configurations in the device_config.json are the ones you wanted), and
  • an acquisition_info.json with information about the acquisition.
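On the PC side, the acquisition folders can be enumerated with a short sketch. This is a convenience helper written for this manual (the function name is an assumption), relying only on the folder layout described above:

```python
from pathlib import Path

def list_acquisitions(sd_root: Path) -> dict[str, list[str]]:
    """Map each STWINBOX_##### acquisition folder on the SD card to the
    names of its logged sensorName_subSensorName.dat files."""
    logs = {}
    for folder in sorted(sd_root.glob("STWINBOX_*")):
        if folder.is_dir():
            # Only the raw .dat logs; device_config.json and
            # acquisition_info.json are metadata, not sensor data.
            logs[folder.name] = sorted(p.name for p in folder.glob("*.dat"))
    return logs
```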

Info white.png Information
For details on how to use all the features of the provided DATALOG2_Release.bin binary the users are invited to refer to the user manual of FP-SNS-DATALOG2.

9.1. device_config.json file

The device_config.json file contains the configurations of all the onboard sensors of the STEVAL-STWINBX1 in JSON format. The top-level object consists of three attributes: devices, schema_version, and uuid.

This snippet is provided AS IS, and by taking it, you agree to be bound to the license terms that can be found here for the component: Application.


{
    "devices": [ ... ],
    "schema_version": "2.0.0",
    "uuid": "110c3c63-03ea-4468-83fc-b853378cfb8e"
}

devices holds the information on the device. It contains the "board_id", "fw_id", "sn", and all the sensors as "components".

This snippet is provided AS IS, and by taking it, you agree to be bound to the license terms that can be found here for the component: Application.


{"devices": [{
    "board_id": 14,
    "fw_id": 7,
    "sn": "004300194841500520363230",
    "components": [...]
}]}

components is an array with the information of all the onboard sensors. Each sensor has a different set of properties. Below, we provide the information for the ism330dhcx_acc sensor.

This snippet is provided AS IS, and by taking it, you agree to be bound to the license terms that can be found here for the component: Application.


"components": [
    {
        "ism330dhcx_acc": {
            "c_type": 0,
            "data_type": "int16",
            "dim": 3,
            "enable": true,
            "ep_id": 0,
            "fs": 1,
            "ioffset": 0.17299999296665192,
            "measodr": 1863.6363525390625,
            "odr": 7,
            "samples_per_ts": {
                "max": 1000,
                "min": 0,
                "val": 1000
            },
            "sd_dps": 13312,
            "sensitivity": 0.0001219999976456165,
            "stream_id": 0,
            "usb_dps": 499
        }
    }
]

In the highlighted fields, users can see the information to be changed when logging the data with the ism330dhcx_acc sensor. The enable tag activates/deactivates the sensor, fs: 1 means that the full scale of the sensor is set to 4 g, and odr: 7 means that the output data rate of the sensor is 1666 Hz. For details, readers are invited to refer to the user manual of the FP-SNS-DATALOG2 package.
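Based on the layout shown above, a parsed device_config.json can also be edited programmatically before it is copied to the SD card. The helper below is an illustrative sketch written for this manual, not part of the pack:

```python
def set_sensor(config: dict, name: str, **props) -> dict:
    """Update one sensor component in a parsed device_config.json object.

    Assumes the layout shown above: config["devices"][0]["components"] is a
    list of single-key dicts such as {"ism330dhcx_acc": {...}}.
    """
    for component in config["devices"][0]["components"]:
        if name in component:
            # e.g. enable, odr, fs, as encoded by the firmware.
            component[name].update(props)
            return config
    raise KeyError(f"sensor {name!r} not found in device_config")
```

For example, set_sensor(cfg, "ism330dhcx_acc", enable=False) deactivates the accelerometer before the file is written back with json.dump.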

10. Documents and related resources