fix tflitemicro_person_detection

yangqingsheng
2021-01-05 11:48:57 +08:00
parent 6bf3533c32
commit fdc39fe87f
15 changed files with 25133 additions and 23954 deletions


@@ -1,329 +1,330 @@
# TensorFlow Lite Micro Porting Guide (Keil Version)
**Authors:**
GitHub: [Derekduke](https://github.com/Derekduke) E-mail: dkeji627@gmail.com
GitHub: [QingChuanWS](https://github.com/QingChuanWS) E-mail: bingshan45@163.com
GitHub: [yangqings](https://github.com/yangqings) E-mail: yangqingsheng12@outlook.com
## Overview
This tutorial targets the STM32 NUCLEO-L496ZG (Cortex-M4, 80 MHz) development board running TencentOS tiny. Using the TensorFlow Lite Micro framework with CMSIS-NN accelerated kernels, it implements inference of a **person detection model** on the STM32L496ZG.
For a detailed introduction to the TensorFlow Lite Micro component, see TFlite_Micro_Component_User_Guide.md in the `TencentOS-tiny\components\ai\tflite_micro` directory.
The RGB image fed to the neural network in this example is 18 KB (96 * 96 pixels * 2 bytes). After optimization, the memory consumed on the STM32L496 platform is:
- SRAM: 168 KB
- Flash: 314 KB
In principle, any STM32 Cortex-M series MCU that meets these memory requirements can be ported following this guide.
## I. Preparation Before Porting
#### 1. Prepare the Target Hardware (Development Board / Sensor / Module)
The following hardware is required:
- Development board: NUCLEO-L496ZG (MCU: STM32L496ZG)
- Camera: captures RGB images; this example uses an OV2640 camera
- LCD: displays the RGB images; this example uses a 2.4-inch LCD (SPI interface)
The actual hardware is shown below:
<div align=center>
<img src="image/all.jpg" width=50% />
</div>
#### 2. Prepare the TencentOS tiny Base Keil Project
- First, port TencentOS tiny by following the Keil porting tutorial:
https://github.com/Tencent/TencentOS-tiny/blob/master/doc/10.Porting_Manual_for_KEIL.md
- STM32CubeMX is used later to initialize the MCU peripherals, so make sure it is installed.
- Once the port succeeds, the project can switch between thread tasks and print "hello world" over the serial port; the base Keil project is then ready.
#### 3. Obtain TensorFlow Lite Micro
There are three ways to obtain tflite_micro:
1. From the `components\ai\tflite_micro` directory of the TencentOS tiny repository;
2. As prebuilt lib files: the ARM_CortexM4_lib, ARM_CortexM7_lib, and ARM_CortexM55_lib folders under `TencentOS-tiny\components\ai\tflite_micro`;
3. From the TensorFlow repository. The TFLite Micro source is open source at https://github.com/tensorflow/tensorflow ; the complete TensorFlow Lite Micro source can be obtained by following Google's official TFLite Micro tutorial.
If you have no prior tflite_micro experience, the **first** or **second** option is recommended. If you want to fetch the latest source or build the lib files yourself, see TFlite_Micro_Component_User_Guide.md in the `TencentOS-tiny\components\tflite_micro` directory. This guide uses the tflite_micro component from the TencentOS tiny repository directly.
## II. BSP Preparation
### 1. Project Directory Layout
The directory layout of the example is as follows:
| Level 1 | Level 2 | Level 3 | Description |
| :-------: | :--------------------------: | :-------------------: | :----------------------------------------------------------: |
| arch | arm | | IP core architecture support for TencentOS tiny (M-core interrupt, scheduling, and tick code) |
| board | NUCLEO_STM32L496ZG | | Project files for the target chip |
| | | BSP | Board support package; peripheral driver code is in the Hardware directory |
| component | ai | tflite_micro | tflite_micro source and related lib files |
| examples | tflitemicro_person_detection | | Person detection demo |
| | | tflu_person_detection | Person detection example code |
| kernel | core | | TencentOS tiny kernel source |
| | pm | | TencentOS tiny low-power module source |
| osal | cmsis_os | | CMSIS OS adaptation provided by TencentOS tiny |
After the TencentOS tiny base Keil project is ready, continue adding peripheral driver code on top of that project.
### 2. LCD Driver
This example uses a 2.4-inch LCD with a 240 * 320 resolution, an SPI interface, and an ILI9341 controller.
You can also use a different LCD and port its driver yourself; an LCD makes it easy to debug the camera and check that the images are correct.
#### 2.1 SPI Initialization
Enter the `TencentOS-tiny\board\NUCLEO_STM32L496ZG\BSP` directory, open the TencentOS_tiny.ioc project, and initialize the MCU peripherals with STM32CubeMX.
<div align=center>
<img src="./image/spi init.png" width=100% />
</div>
#### 2.2 Open Manage Project Items in Keil
<div align=center>
<img src="./image/bsp_keil_manage_project.png" width=60% />
</div>
#### 2.3 Add a New Group hal to the Project
<div align=center>
<img src="./image/bsp_添加hal.png" width=80% />
</div>
#### 2.4 Add the Driver Code
Add `lcd_2inch4.c` and `lcd_config.c`:
<div align=center>
<img src="./image/bsp_add lcd driver file.png" width=80% />
</div>
Add the include paths for the headers `lcd_2inch4.h` and `lcd_config.h`:
<div align=center>
<img src="./image/bsp_include_path.png" width=80% />
</div>
<div align=center>
<img src="./image/bsp_include_lcd_path.png" width=80% />
</div>
All peripheral driver headers (.h files) are under the `TencentOS-tiny\board\NUCLEO_STM32L496ZG\BSP\Hardware\Inc` path.
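With the driver files and include paths in place, it is worth verifying the LCD on its own before bringing up the camera. The snippet below is only a hedged sketch, not part of the demo: `LCD_2IN4_Init` and the test pattern are assumed names for this 2.4-inch driver, while `LCD_2IN4_Display` is the same call the demo uses later for camera frames.
```c
#include "lcd_2inch4.h"

/* One full RGB565 frame (240 * 320 * 2 = 150 KB). This is large for the
 * STM32L496's SRAM and is shown only to illustrate the display call. */
static uint16_t test_frame[240 * 320];

void lcd_smoke_test(void)
{
    LCD_2IN4_Init();                         /* assumed driver init entry point */
    for (int i = 0; i < 240 * 320; i++) {
        test_frame[i] = 0xF800;              /* RGB565 pure red */
    }
    LCD_2IN4_Display(test_frame, 240, 320);  /* the call used for camera frames below */
}
```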
### 3. Camera Driver
#### 3.1 Peripheral Initialization
Enter the `TencentOS-tiny\board\NUCLEO_STM32L496ZG\BSP` directory, open the TencentOS_tiny.ioc project, initialize the DCMI peripheral, enable the DCMI global interrupt, and enable the DMA channel with its Direction set to Peripheral To Memory.
<div align=center>
<img src="./image/bsp_cubemx_dcmi.png" width=100% />
</div>
<div align=center>
<img src="./image/bsp_cubemx_dcmi_2.png" width=100% />
</div>
#### 3.2 Add the Driver Code
<div align=center>
<img src="./image/bsp_add camera driver file.png" width=80% />
</div>
**Override the DCMI frame interrupt callback in the mcu_init source file.**
Note that the code must be placed between the comment markers generated by CubeMX; only then will it survive when CubeMX reconfigures the peripherals and regenerates the code. As shown below, the code goes between the /* USER CODE BEGIN 4 */ and /* USER CODE END 4 */ comments:
```C
/* USER CODE BEGIN 4 */
void HAL_DCMI_FrameEventCallback(DCMI_HandleTypeDef *hdcmi)
{
  /* State 2 is HAL_DCMI_STATE_BUSY: an ongoing capture has completed a frame */
  if (hdcmi->State == 2 && frame_flag != 1) {
    frame_flag = 1; /* tell the application a full frame is in the buffer */
  }
}
/* USER CODE END 4 */
```
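The `frame_flag` shared by this callback and the application task is not part of the generated code; it has to be declared once as a global that is safe to access from interrupt context. A minimal sketch, assuming it is placed in the private-variables USER CODE section of `mcu_init.c`:
```c
/* USER CODE BEGIN PV */
/* Set in HAL_DCMI_FrameEventCallback (interrupt context) and cleared by the
 * application task, so it must be volatile. */
volatile uint8_t frame_flag = 0;
/* USER CODE END PV */
```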
### 4. Displaying Camera Images on the LCD
The task function for this example is in
`TencentOS-tiny\examples\tflitemicro_person_detection\tflitemicro_person_detection.c`:
```c
void task1(void *arg)
{
    while (1) {
        if (frame_flag == 1) {
            if (HAL_DCMI_Stop(&hdcmi)) Error_Handler();  /* stop DCMI */
            /* display the captured frame */
            LCD_2IN4_Display(camera_buffer, OV2640_PIXEL_WIDTH, OV2640_PIXEL_HEIGHT);
            frame_flag = 0;
            /* restart DCMI; the DMA length is counted in 32-bit words,
               so the number of 16-bit pixels is divided by 2 */
            if (HAL_DCMI_Start_DMA(&hdcmi, DCMI_MODE_CONTINUOUS,
                                   (uint32_t)camera_buffer,
                                   (OV2640_PIXEL_WIDTH * OV2640_PIXEL_HEIGHT) / 2))
                Error_Handler();
            osDelay(50);
        }
    }
}
```
After the steps above, if the camera runs and the LCD displays its images in real time, the BSP is ready. If you use a different LCD or camera, adapt the peripheral initialization and driver porting to your hardware.
## III. TensorFlow Lite Micro Porting
### 1. Adding the tflite_micro Component to the Keil Project
Since the NUCLEO-L496ZG uses an ARM Cortex-M4 core, the ARM Cortex-M4 build of tensorflow_lite_micro.lib can be used directly, which simplifies the tflite_micro setup.
#### 1.1 Add a New Group tensorflow to the Project
<div align=center>
<img src="./image/tflu_tensorflow文件夹增加的内容.png" width=80% />
</div>
#### 1.2 Add the Source Files for the Person Detection Demo
<div align=center>
<img src="./image/tflu_需要添加的文件.png" width=80% />
</div>
The path of retarget.c is `TencentOS-tiny\components\ai\tflite_micro\KEIL\retarget.c`;
the path of tensorflow_lite_micro.lib is `TencentOS-tiny\components\ai\tflite_micro\ARM_CortexM4_lib\tensorflow_lite_micro.lib`;
the remaining .cc and .h files are in the `examples\tflitemicro_person_detection\tflu_person_detection` folder.
#### 1.3 Disable Keil's MicroLib
<div align=center>
<img src="./image/tflu_取消Microlib.png" width=80% />
</div>
#### 1.4 Add the Include Paths Required by tflite_micro
<div align=center>
<img src="./image/tflu_添加include.png" width=80% />
</div>
Note: the last path in the list is:
```
TencentOS-tiny\components\ai\tflite_micro\ARM_CortexM4_lib\tensorflow\lite\micro\tools\make\downloads
```
#### 1.5 Set the Optimization Level and the tflite_micro Log UART
<div align=center>
<img src="./image/tflu_STM32496宏.png" width=80% />
</div>
The macro `NUCLEO_STM32L496ZG` selects hlpuart1 of the Nucleo STM32L496 as the output UART for the system printf function; it is handled in `mcu_init.c` in the Nucleo STM32L496 BSP folder.
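retarget.c is what routes the C library's character output to that UART. Below is a minimal sketch of what such a retarget typically looks like for the ARM compiler with MicroLib disabled (see section 1.3); the component's actual retarget.c may differ, and the direct use of `hlpuart1` here is an assumption:
```c
#include <stdio.h>
#include "stm32l4xx_hal.h"

extern UART_HandleTypeDef hlpuart1;  /* defined in the CubeMX-generated BSP */

/* printf() in the ARM C library ultimately calls fputc for each character. */
int fputc(int ch, FILE *f)
{
    (void)f;
    HAL_UART_Transmit(&hlpuart1, (uint8_t *)&ch, 1, HAL_MAX_DELAY);
    return ch;
}
```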
### 2. Writing the Person_Detection Task Functions
The task functions for this example are in
`TencentOS-tiny\examples\tflitemicro_person_detection\tflitemicro_person_detection.c`.
#### 2.1 Image Preprocessing
<div align=center>
<img src="./image/RGB565.jpg" width=50% />
</div>
In this example the model expects grayscale input. To convert the RGB color image captured by the camera into the grayscale image the model needs, the R, G, and B channel values are extracted from the RGB565 pixel format, and each pixel's gray level is computed with the psychophysical luma formula (gray = 0.299 R + 0.587 G + 0.114 B). The code is as follows:
```c
uint8_t rgb565_to_gray(uint16_t bg_color)
{
    /* expand the 5/6/5-bit fields to 8-bit channel values */
    uint8_t bg_r = ((bg_color >> 11) & 0x1f) << 3;
    uint8_t bg_g = ((bg_color >> 5) & 0x3f) << 2;
    uint8_t bg_b = (bg_color & 0x1f) << 3;
    /* integer luma: gray = (299*R + 587*G + 114*B) / 1000, rounded */
    uint8_t gray = (bg_r * 299 + bg_g * 587 + bg_b * 114 + 500) / 1000;
    return gray;
}

void input_convert(uint16_t* camera_buffer, uint8_t* model_buffer)
{
    for (int i = 0; i < OV2640_PIXEL_WIDTH * OV2640_PIXEL_HEIGHT; i++) {
        model_buffer[i] = rgb565_to_gray(camera_buffer[i]);
    }
}
```
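As a quick sanity check of the conversion, a pure-red RGB565 pixel 0xF800 expands to R = 248, G = B = 0, so gray = (248 * 299 + 500) / 1000 = 74. A hypothetical test helper (the expected values follow from the formula above; note that plain bit-shifting expands a full-scale field to 248 or 252, not 255):
```c
#include <assert.h>

void test_rgb565_to_gray(void)
{
    assert(rgb565_to_gray(0xF800) == 74);   /* pure red:   0.299 weight */
    assert(rgb565_to_gray(0x07E0) == 148);  /* pure green: 0.587 weight */
    assert(rgb565_to_gray(0xFFFF) == 250);  /* white: shifts top out below 255 */
}
```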
#### 2.2 Person Detection Task Functions
```c
void task1(void *arg)
{
    while (1) {
        if (frame_flag == 1) {
            printf("***person detection task\r\n");
            if (HAL_DCMI_Stop(&hdcmi)) Error_Handler();  /* stop DCMI */
            input_convert(camera_buffer, model_buffer);  /* convert input */
            person_detect(model_buffer);                 /* inference */
            /* display the captured frame */
            LCD_2IN4_Display(camera_buffer, OV2640_PIXEL_WIDTH, OV2640_PIXEL_HEIGHT);
            frame_flag = 0;
            /* restart DCMI; DMA length is in 32-bit words, hence /2 */
            if (HAL_DCMI_Start_DMA(&hdcmi, DCMI_MODE_CONTINUOUS,
                                   (uint32_t)camera_buffer,
                                   (OV2640_PIXEL_WIDTH * OV2640_PIXEL_HEIGHT) / 2))
                Error_Handler();
        }
        osDelay(50);
    }
}

void task2(void *arg)
{
    while (1) {
        printf("***task2\r\n");
        osDelay(50);
    }
}
```
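For these tasks to compile, the frame and model buffers must be defined somewhere, and `person_detect_init` must run once before the first inference. A minimal sketch under stated assumptions: the 96 * 96 dimensions match the model input described in the overview, and the two entry points are the C-linkage functions exported by the demo's main_functions.h.
```c
#include <stdint.h>

#define OV2640_PIXEL_WIDTH  96   /* assumed: camera configured to the model's input size */
#define OV2640_PIXEL_HEIGHT 96

uint16_t camera_buffer[OV2640_PIXEL_WIDTH * OV2640_PIXEL_HEIGHT]; /* RGB565 frame */
uint8_t  model_buffer[OV2640_PIXEL_WIDTH * OV2640_PIXEL_HEIGHT];  /* grayscale input */

/* C-linkage entry points from main_functions.h */
extern void person_detect_init(void);
extern int  person_detect(uint8_t *hardware_input);

void app_init_sketch(void)
{
    person_detect_init(); /* builds the interpreter and allocates the tensor arena; call once */
    /* ... then create task1/task2 and start the TencentOS tiny scheduler ... */
}
```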
#### 2.3 Results
Runtime information is printed over the serial port. When the camera is moved so that no person is in view, the output is:
<div align=center>
<img src="./image/reasult_no_person.png" width=70% />
</div>
When the camera is pointed at a person, the output is:
<div align=center>
<img src="./image/reasult_person.png" width=70% />
</div>
Inference on one frame of image takes about 633 ms.
For more about tflite_micro, see the [tensorflow](https://tensorflow.google.cn/lite/microcontrollers?hl=zh_cn) website and TFlite_Micro_Component_User_Guide.md in the `TencentOS-tiny\components\tflite_micro` directory.


@@ -1,23 +0,0 @@
## TencentOS-tiny_Person_Detection_Demo
### 1. Directory Structure:
- TencentOS-tiny\board\NUCLEO_STM32L496ZG\BSP\Hardware : **peripheral driver code**
- TencentOS-tiny\examples\tflitemicro_person_detection : **demo task functions**
- TencentOS-tiny\board\NUCLEO_STM32L496ZG\KEIL\tflitemicro_person_detection : **Keil project**
- TencentOS-tiny\components\tflite_micro\tensorflow : **tflite_micro code**
### 2. Completed Work:
- Regenerated the peripheral initialization code with STM32CubeMX, using the same firmware library version as TOS
- TOS, the camera, and the LCD all work correctly
- tflite_micro added to the project as a component
- retarget.c added to the project and selected via a macro
- The person detection demo in examples works correctly
### 3. Remaining Work:
- Variable and function names not yet fully unified to the TOS style
- Keil porting guide
- tflite_micro user guide

File diff suppressed because it is too large


@@ -137,7 +137,7 @@
<DriverSelection>4107</DriverSelection>
</Flash1>
<bUseTDR>1</bUseTDR>
<Flash2>STLink\ST-LINKIII-KEIL_SWO.dll</Flash2>
<Flash2>BIN\UL2CM3.DLL</Flash2>
<Flash3></Flash3>
<Flash4></Flash4>
<pFcarmOut></pFcarmOut>
@@ -339,7 +339,7 @@
<MiscControls></MiscControls>
<Define>USE_HAL_DRIVER,STM32L496xx,NUCLEO_STM32L496ZG</Define>
<Undefine></Undefine>
<IncludePath>..\..\BSP\Inc;..\..\..\..\platform\vendor_bsp\st\STM32L4xx_HAL_Driver\Inc;..\..\..\..\platform\vendor_bsp\st\STM32L4xx_HAL_Driver\Inc\Legacy;..\..\..\..\platform\vendor_bsp\st\CMSIS\Device\ST\STM32L4xx\Include;..\..\..\..\platform\vendor_bsp\st\CMSIS\Include;..\..\..\..\arch\arm\arm-v7m\common\include;..\..\..\..\arch\arm\arm-v7m\cortex-m4\armcc;..\..\..\..\kernel\core\include;..\..\..\..\kernel\pm\include;..\..\..\..\osal\cmsis_os;..\..\..\..\examples\hello_world;..\..\TOS_CONFIG;..\..\..\..\net\at\include;..\..\..\..\kernel\hal\include;..\..\BSP\Hardware\Inc;..\..\..\..\components\ai\tflite_micro\ARM_CortexM4_lib;..\..\..\..\components\ai\tflite_micro\ARM_CortexM4_lib\third_party\flatbuffers\include;..\..\..\..\components\ai\tflite_micro\ARM_CortexM4_lib\third_party\gemmlowp;..\..\..\..\components\ai\tflite_micro\ARM_CortexM4_lib\third_party\kissfft;..\..\..\..\components\ai\tflite_micro\ARM_CortexM4_lib\third_party\ruy;..\..\..\..\components\ai\tflite_micro\ARM_CortexM4_lib\tensorflow\lite\micro\tools\make\downloads</IncludePath>
<IncludePath>..\..\BSP\Inc;..\..\..\..\platform\vendor_bsp\st\STM32L4xx_HAL_Driver\Inc;..\..\..\..\platform\vendor_bsp\st\STM32L4xx_HAL_Driver\Inc\Legacy;..\..\..\..\platform\vendor_bsp\st\CMSIS\Device\ST\STM32L4xx\Include;..\..\..\..\platform\vendor_bsp\st\CMSIS\Include;..\..\..\..\arch\arm\arm-v7m\common\include;..\..\..\..\arch\arm\arm-v7m\cortex-m4\armcc;..\..\..\..\kernel\core\include;..\..\..\..\kernel\pm\include;..\..\..\..\osal\cmsis_os;..\..\..\..\examples\hello_world;..\..\TOS_CONFIG;..\..\..\..\net\at\include;..\..\..\..\kernel\hal\include;..\..\BSP\Hardware\Inc;..\..\..\..\components\ai\tflite_micro\ARM_CortexM4_lib;..\..\..\..\components\ai\tflite_micro\ARM_CortexM4_lib\third_party\flatbuffers\include;..\..\..\..\components\ai\tflite_micro\ARM_CortexM4_lib\third_party\gemmlowp;..\..\..\..\components\ai\tflite_micro\ARM_CortexM4_lib\third_party\kissfft;..\..\..\..\components\ai\tflite_micro\ARM_CortexM4_lib\third_party\ruy;..\..\..\..\components\ai\tflite_micro\ARM_CortexM4_lib\tensorflow\lite\micro\tools\make\downloads;..\..\..\..\examples\tflitemicro_person_detection\tflu_person_detection</IncludePath>
</VariousControls>
</Cads>
<Aads>
@@ -778,31 +778,6 @@
<Group>
<GroupName>tensorflow</GroupName>
<Files>
<File>
<FileName>person_detect_model_data.cc</FileName>
<FileType>8</FileType>
<FilePath>.\tflu_person_detection\person_detect_model_data.cc</FilePath>
</File>
<File>
<FileName>model_settings.cc</FileName>
<FileType>8</FileType>
<FilePath>.\tflu_person_detection\model_settings.cc</FilePath>
</File>
<File>
<FileName>main_functions.cc</FileName>
<FileType>8</FileType>
<FilePath>.\tflu_person_detection\main_functions.cc</FilePath>
</File>
<File>
<FileName>image_provider.cc</FileName>
<FileType>8</FileType>
<FilePath>.\tflu_person_detection\image_provider.cc</FilePath>
</File>
<File>
<FileName>detection_responder.cc</FileName>
<FileType>8</FileType>
<FilePath>.\tflu_person_detection\detection_responder.cc</FilePath>
</File>
<File>
<FileName>retarget.c</FileName>
<FileType>1</FileType>
@@ -813,6 +788,31 @@
<FileType>4</FileType>
<FilePath>..\..\..\..\components\ai\tflite_micro\ARM_CortexM4_lib\tensorflow_lite_micro_M4.lib</FilePath>
</File>
<File>
<FileName>detection_responder.cc</FileName>
<FileType>8</FileType>
<FilePath>..\..\..\..\examples\tflitemicro_person_detection\tflu_person_detection\detection_responder.cc</FilePath>
</File>
<File>
<FileName>image_provider.cc</FileName>
<FileType>8</FileType>
<FilePath>..\..\..\..\examples\tflitemicro_person_detection\tflu_person_detection\image_provider.cc</FilePath>
</File>
<File>
<FileName>main_functions.cc</FileName>
<FileType>8</FileType>
<FilePath>..\..\..\..\examples\tflitemicro_person_detection\tflu_person_detection\main_functions.cc</FilePath>
</File>
<File>
<FileName>model_settings.cc</FileName>
<FileType>8</FileType>
<FilePath>..\..\..\..\examples\tflitemicro_person_detection\tflu_person_detection\model_settings.cc</FilePath>
</File>
<File>
<FileName>person_detect_model_data.cc</FileName>
<FileType>8</FileType>
<FilePath>..\..\..\..\examples\tflitemicro_person_detection\tflu_person_detection\person_detect_model_data.cc</FilePath>
</File>
</Files>
</Group>
<Group>


@@ -1,25 +0,0 @@
/* Copyright 2019 The TensorFlow Authors. All Rights Reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
==============================================================================*/
#include "tensorflow/lite/micro/examples/person_detection_experimental/detection_responder.h"
// This dummy implementation writes person and no person scores to the error
// console. Real applications will want to take some custom action instead, and
// should implement their own versions of this function.
void RespondToDetection(tflite::ErrorReporter* error_reporter,
int8_t person_score, int8_t no_person_score) {
TF_LITE_REPORT_ERROR(error_reporter, "person score:%d no person score %d",
person_score, no_person_score);
}


@@ -1,34 +0,0 @@
/* Copyright 2019 The TensorFlow Authors. All Rights Reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
==============================================================================*/
// Provides an interface to take an action based on the output from the person
// detection model.
#ifndef TENSORFLOW_LITE_MICRO_EXAMPLES_PERSON_DETECTION_EXPERIMENTAL_DETECTION_RESPONDER_H_
#define TENSORFLOW_LITE_MICRO_EXAMPLES_PERSON_DETECTION_EXPERIMENTAL_DETECTION_RESPONDER_H_
#include "tensorflow/lite/c/common.h"
#include "tensorflow/lite/micro/micro_error_reporter.h"
// Called every time the results of a person detection run are available. The
// `person_score` has the numerical confidence that the captured image contains
// a person, and `no_person_score` has the numerical confidence that the image
// does not contain a person. Typically if person_score > no person score, the
// image is considered to contain a person. This threshold may be adjusted for
// particular applications.
void RespondToDetection(tflite::ErrorReporter* error_reporter,
int8_t person_score, int8_t no_person_score);
#endif // TENSORFLOW_LITE_MICRO_EXAMPLES_PERSON_DETECTION_EXPERIMENTAL_DETECTION_RESPONDER_H_


@@ -1,26 +0,0 @@
/* Copyright 2019 The TensorFlow Authors. All Rights Reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
==============================================================================*/
#include "tensorflow/lite/micro/examples/person_detection_experimental/image_provider.h"
#include "tensorflow/lite/micro/examples/person_detection_experimental/model_settings.h"
TfLiteStatus GetImage(tflite::ErrorReporter* error_reporter, int image_width,
int image_height, int channels, int8_t* image_data,
uint8_t * hardware_input) {
for (int i = 0; i < image_width * image_height * channels; ++i) {
image_data[i] = hardware_input[i];
}
return kTfLiteOk;
}


@@ -1,40 +0,0 @@
/* Copyright 2019 The TensorFlow Authors. All Rights Reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
==============================================================================*/
#ifndef TENSORFLOW_LITE_MICRO_EXAMPLES_PERSON_DETECTION_EXPERIMENTAL_IMAGE_PROVIDER_H_
#define TENSORFLOW_LITE_MICRO_EXAMPLES_PERSON_DETECTION_EXPERIMENTAL_IMAGE_PROVIDER_H_
#include "tensorflow/lite/c/common.h"
#include "tensorflow/lite/micro/micro_error_reporter.h"
// This is an abstraction around an image source like a camera, and is
// expected to return 8-bit sample data. The assumption is that this will be
// called in a low duty-cycle fashion in a low-power application. In these
// cases, the imaging sensor need not be run in a streaming mode, but rather can
// be idled in a relatively low-power mode between calls to GetImage(). The
// assumption is that the overhead and time of bringing the low-power sensor out
// of this standby mode is commensurate with the expected duty cycle of the
// application. The underlying sensor may actually be put into a streaming
// configuration, but the image buffer provided to GetImage should not be
// overwritten by the driver code until the next call to GetImage();
//
// The reference implementation can have no platform-specific dependencies, so
// it just returns a static image. For real applications, you should
// ensure there's a specialized implementation that accesses hardware APIs.
TfLiteStatus GetImage(tflite::ErrorReporter* error_reporter, int image_width,
int image_height, int channels, int8_t* image_data,
uint8_t * hardware_input);
#endif // TENSORFLOW_LITE_MICRO_EXAMPLES_PERSON_DETECTION_EXPERIMENTAL_IMAGE_PROVIDER_H_


@@ -1,119 +0,0 @@
/* Copyright 2019 The TensorFlow Authors. All Rights Reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
==============================================================================*/
#include "tensorflow/lite/micro/examples/person_detection_experimental/main_functions.h"
#include "tensorflow/lite/micro/examples/person_detection_experimental/detection_responder.h"
#include "tensorflow/lite/micro/examples/person_detection_experimental/image_provider.h"
#include "tensorflow/lite/micro/examples/person_detection_experimental/model_settings.h"
#include "tensorflow/lite/micro/examples/person_detection_experimental/person_detect_model_data.h"
#include "tensorflow/lite/micro/micro_error_reporter.h"
#include "tensorflow/lite/micro/micro_interpreter.h"
#include "tensorflow/lite/micro/micro_mutable_op_resolver.h"
#include "tensorflow/lite/schema/schema_generated.h"
#include "tensorflow/lite/version.h"
// Globals, used for compatibility with Arduino-style sketches.
namespace {
tflite::ErrorReporter* error_reporter = nullptr;
const tflite::Model* model = nullptr;
tflite::MicroInterpreter* interpreter = nullptr;
TfLiteTensor* input = nullptr;
// In order to use optimized tensorflow lite kernels, a signed int8_t quantized
// model is preferred over the legacy unsigned model format. This means that
// throughout this project, input images must be converted from unsigned to
// signed format. The easiest and quickest way to convert from unsigned to
// signed 8-bit integers is to subtract 128 from the unsigned value to get a
// signed value.
// An area of memory to use for input, output, and intermediate arrays.
constexpr int kTensorArenaSize = 115 * 1024;
static uint8_t tensor_arena[kTensorArenaSize];
} // namespace
// The name of this function is important for Arduino compatibility.
void person_detect_init() {
// Set up logging. Google style is to avoid globals or statics because of
// lifetime uncertainty, but since this has a trivial destructor it's okay.
// NOLINTNEXTLINE(runtime-global-variables)
static tflite::MicroErrorReporter micro_error_reporter;
error_reporter = &micro_error_reporter;
// Map the model into a usable data structure. This doesn't involve any
// copying or parsing, it's a very lightweight operation.
model = tflite::GetModel(g_person_detect_model_data);
if (model->version() != TFLITE_SCHEMA_VERSION) {
TF_LITE_REPORT_ERROR(error_reporter,
"Model provided is schema version %d not equal "
"to supported version %d.",
model->version(), TFLITE_SCHEMA_VERSION);
return;
}
// Pull in only the operation implementations we need.
// This relies on a complete list of all the ops needed by this graph.
// An easier approach is to just use the AllOpsResolver, but this will
// incur some penalty in code space for op implementations that are not
// needed by this graph.
//
// tflite::AllOpsResolver resolver;
// NOLINTNEXTLINE(runtime-global-variables)
static tflite::MicroMutableOpResolver<5> micro_op_resolver;
micro_op_resolver.AddAveragePool2D();
micro_op_resolver.AddConv2D();
micro_op_resolver.AddDepthwiseConv2D();
micro_op_resolver.AddReshape();
micro_op_resolver.AddSoftmax();
// Build an interpreter to run the model with.
// NOLINTNEXTLINE(runtime-global-variables)
static tflite::MicroInterpreter static_interpreter(
model, micro_op_resolver, tensor_arena, kTensorArenaSize, error_reporter);
interpreter = &static_interpreter;
// Allocate memory from the tensor_arena for the model's tensors.
TfLiteStatus allocate_status = interpreter->AllocateTensors();
if (allocate_status != kTfLiteOk) {
TF_LITE_REPORT_ERROR(error_reporter, "AllocateTensors() failed");
return;
}
// Get information about the memory area to use for the model's input.
input = interpreter->input(0);
}
// The name of this function is important for Arduino compatibility.
int person_detect(uint8_t * hardware_input) {
// Get image from provider.
if (kTfLiteOk != GetImage(error_reporter, kNumCols, kNumRows, kNumChannels,
input->data.int8, hardware_input)) {
TF_LITE_REPORT_ERROR(error_reporter, "Image capture failed.");
}
// Run the model on this input and make sure it succeeds.
if (kTfLiteOk != interpreter->Invoke()) {
TF_LITE_REPORT_ERROR(error_reporter, "Invoke failed.");
}
TfLiteTensor* output = interpreter->output(0);
// Process the inference results.
int8_t person_score = output->data.uint8[kPersonIndex];
int8_t no_person_score = output->data.uint8[kNotAPersonIndex];
RespondToDetection(error_reporter, person_score, no_person_score);
if(person_score >= no_person_score + 50) return 1;
else return 0;
}


@@ -1,30 +0,0 @@
/* Copyright 2019 The TensorFlow Authors. All Rights Reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
==============================================================================*/
#ifndef TENSORFLOW_LITE_MICRO_EXAMPLES_PERSON_DETECTION_EXPERIMENTAL_MAIN_FUNCTIONS_H_
#define TENSORFLOW_LITE_MICRO_EXAMPLES_PERSON_DETECTION_EXPERIMENTAL_MAIN_FUNCTIONS_H_
#include "tensorflow/lite/c/common.h"
// Initializes all data needed for the example. The name is important, and needs
// to be setup() for Arduino compatibility.
extern "C" void person_detect_init();
// Runs one iteration of data gathering and inference. This should be called
// repeatedly from the application code. The name needs to be loop() for Arduino
// compatibility.
extern "C" int person_detect(uint8_t * hardware_input);
#endif // TENSORFLOW_LITE_MICRO_EXAMPLES_PERSON_DETECTION_EXPERIMENTAL_MAIN_FUNCTIONS_H_


@@ -1,21 +0,0 @@
/* Copyright 2019 The TensorFlow Authors. All Rights Reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
==============================================================================*/
#include "tensorflow/lite/micro/examples/person_detection_experimental/model_settings.h"
const char* kCategoryLabels[kCategoryCount] = {
"notperson",
"person",
};


@@ -1,35 +0,0 @@
/* Copyright 2019 The TensorFlow Authors. All Rights Reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
==============================================================================*/
#ifndef TENSORFLOW_LITE_MICRO_EXAMPLES_PERSON_DETECTION_EXPERIMENTAL_MODEL_SETTINGS_H_
#define TENSORFLOW_LITE_MICRO_EXAMPLES_PERSON_DETECTION_EXPERIMENTAL_MODEL_SETTINGS_H_
// Keeping these as constant expressions allow us to allocate fixed-sized arrays
// on the stack for our working memory.
// All of these values are derived from the values used during model training,
// if you change your model you'll need to update these constants.
constexpr int kNumCols = 96;
constexpr int kNumRows = 96;
constexpr int kNumChannels = 1;
constexpr int kMaxImageSize = kNumCols * kNumRows * kNumChannels;
constexpr int kCategoryCount = 2;
constexpr int kPersonIndex = 1;
constexpr int kNotAPersonIndex = 0;
extern const char* kCategoryLabels[kCategoryCount];
#endif // TENSORFLOW_LITE_MICRO_EXAMPLES_PERSON_DETECTION_EXPERIMENTAL_MODEL_SETTINGS_H_


@@ -1,27 +0,0 @@
/* Copyright 2019 The TensorFlow Authors. All Rights Reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
==============================================================================*/
// This is a standard TensorFlow Lite model file that has been converted into a
// C data array, so it can be easily compiled into a binary for devices that
// don't have a file system. It was created using the command:
// xxd -i person_detect.tflite > person_detect_model_data.cc
#ifndef TENSORFLOW_LITE_MICRO_EXAMPLES_PERSON_DETECTION_EXPERIMENTAL_PERSON_DETECT_MODEL_DATA_H_
#define TENSORFLOW_LITE_MICRO_EXAMPLES_PERSON_DETECTION_EXPERIMENTAL_PERSON_DETECT_MODEL_DATA_H_
extern const unsigned char g_person_detect_model_data[];
extern const int g_person_detect_model_data_len;
#endif // TENSORFLOW_LITE_MICRO_EXAMPLES_PERSON_DETECTION_EXPERIMENTAL_PERSON_DETECT_MODEL_DATA_H_