Real-Time Human Detection in 3D Lidar PointCloud data using YOLOv5

Introduction

This project aims to detect humans in 3D lidar data using YOLOv5. The dataset is labeled with Roboflow so that human instances in the point cloud data can be detected and classified.

Detection results on PointCloud videos

Dataset

The data are collected with a Livox Horizon lidar and saved into rosbags. By playing the rosbags back, we run our models on the point clouds visualized in RViz (a visualization tool). The color of each point in RViz represents its intensity value, which is determined by the object's surface material. You can find more details and the raw data at the following links:
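YOLOv5 itself operates on 2D images, so the point clouds have to be rendered as images (for example, recordings of an intensity-colored view like the one in RViz) before labeling and training. The snippet below is a purely illustrative sketch, not this repo's preprocessing code: it assumes the points of one scan are available as an (N, 4) NumPy array of x, y, z, intensity and rasterizes them into a top-down intensity image.

    import numpy as np

    def intensity_image(points, img_w=640, img_h=640,
                        x_range=(0.0, 40.0), y_range=(-20.0, 20.0)):
        """Rasterize an (N, 4) x/y/z/intensity scan into a top-down image (illustrative only)."""
        x, y, intensity = points[:, 0], points[:, 1], points[:, 3]
        # keep only points inside the chosen field of view
        keep = (x >= x_range[0]) & (x < x_range[1]) & (y >= y_range[0]) & (y < y_range[1])
        x, y, intensity = x[keep], y[keep], intensity[keep]
        # map metric coordinates to pixel indices
        cols = ((y - y_range[0]) / (y_range[1] - y_range[0]) * (img_w - 1)).astype(int)
        rows = ((x_range[1] - x) / (x_range[1] - x_range[0]) * (img_h - 1)).astype(int)
        img = np.zeros((img_h, img_w), dtype=np.uint8)
        img[rows, cols] = np.clip(intensity, 0, 255).astype(np.uint8)
        return img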

1. Label dataset

  • Roboflow can label, prepare, and host custom data in YOLO format and automatically generate a data.yaml like the one below (a download sketch using the Roboflow Python package follows the example).

    train: ../train/images
    val: ../valid/images
    test: ../test/images
    
    nc: 1
    names: ['human']
    
    roboflow:
      workspace: project
      project: yolo_dection
      version: 1
      license: MIT
      url: https://universe.roboflow.com/project/yolo_dection/dataset/1
    

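As a sketch of pulling this dataset into a training environment with the roboflow Python package (the standard export snippet Roboflow generates; the API key below is a placeholder, and the workspace/project names are taken from the data.yaml above):

    from roboflow import Roboflow

    rf = Roboflow(api_key="YOUR_API_KEY")              # placeholder, use your own key
    project = rf.workspace("project").project("yolo_dection")
    dataset = project.version(1).download("yolov5")    # downloads the dataset in YOLOv5 format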
2. Select a Model

  • YOLOv5m, a medium-sized model, is selected for this project. The different model sizes in the YOLOv5 series are shown below:
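As a quick way to sanity-check the chosen model size, the COCO-pretrained yolov5m checkpoint can be loaded via torch.hub. This is only an illustrative sketch; the actual training in this project starts from yolov5m.pt through train.py as shown in the next section.

    import torch

    # Illustrative only: load the medium COCO-pretrained model via torch.hub
    model = torch.hub.load('ultralytics/yolov5', 'yolov5m', pretrained=True)
    print(model)  # inspect the architecture and parameter count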

3. Training process

3.1 Training images

  • Example training images:

3.2 Train the model

  • Use single-GPU training with train.py:
    !export LD_LIBRARY_PATH=/usr/local/lib64:$LD_LIBRARY_PATH
    !python3 train.py --img 640 --batch 4 --epochs 300 \
      --data /home/qing/Desktop/SummerProject/data.yaml \
      --cfg /media/qing/KINGSTON/2023-01-28/yolov5/models/yolov5m.yaml \
      --weights yolov5m.pt --name yolov5s_results
    
    

All training results are saved to runs/train/ in incrementing run directories, e.g. runs/train/exp2, runs/train/exp3, etc.

  • Training details

All training results can be found here.

3.3 Validation results

  • We tested the trained model on the test images; a selection of the results is shown here.
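The trained weights can also be scored on a held-out split with YOLOv5's val.py; a minimal sketch, assuming the run name from the training command above (so the weights sit in runs/train/yolov5s_results/weights/):

    !python3 val.py --weights runs/train/yolov5s_results/weights/best.pt \
    --data /home/qing/Desktop/SummerProject/data.yaml --img 640 --task test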

4. Video recognition

  • Objects in the video can be identified by running detect.py (a frame-by-frame alternative using torch.hub is sketched at the end of this section).

    !python3 detect.py --weights {RES_DIR}/weights/best.pt \
    --source {data_path} --name {INFER_DIR}
    
  • The weights can be downloaded from here

  • Video Results
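As mentioned above, a frame-by-frame alternative to detect.py is to load the trained weights through torch.hub and run them on the point-cloud video with OpenCV. The sketch below assumes hypothetical paths for the weights and the video file.

    import cv2
    import torch

    # Illustrative sketch only; 'best.pt' and 'pointcloud_video.mp4' are hypothetical paths
    model = torch.hub.load('ultralytics/yolov5', 'custom', path='best.pt')
    cap = cv2.VideoCapture('pointcloud_video.mp4')
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        results = model(frame[..., ::-1])      # OpenCV frames are BGR; the model expects RGB
        detections = results.xyxy[0]           # one row per detection: x1, y1, x2, y2, conf, class
        print(f'{len(detections)} human(s) detected in this frame')
    cap.release()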
