LaneDet

Introduction

LaneDet is an open source lane detection toolbox based on PyTorch that aims to pull together a wide variety of state-of-the-art lane detection models. Developers can use it to reproduce these SOTA methods and to build their own models on top of it.

demo image

Table of Contents

  • Introduction
  • Benchmark and model zoo
  • Installation
  • Data preparation
  • Getting Started
  • Contributing
  • Licenses
  • Acknowledgement

Benchmark and model zoo

Supported backbones:

  • ResNet
  • ERFNet
  • VGG
  • MobileNet
  • DLA (coming soon)

Supported detectors:

  • SCNN
  • RESA
  • UFLD
  • LaneATT
  • CondLane

Installation

Clone this repository

git clone https://github.com/turoad/lanedet.git

We will refer to this directory as $LANEDET_ROOT.

Create a conda virtual environment and activate it (conda is optional)

conda create -n lanedet python=3.8 -y
conda activate lanedet

Install dependencies

# Install PyTorch first; the cudatoolkit version should match the CUDA version installed on your system.

conda install pytorch==1.8.0 torchvision==0.9.0 cudatoolkit=10.1 -c pytorch

# Or you can install via pip
pip install torch==1.8.0 torchvision==0.9.0

# Install LaneDet and the remaining Python dependencies
python setup.py build develop
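
As an optional sanity check, you can confirm that PyTorch was installed correctly and that CUDA is visible before moving on:

python -c "import torch; print(torch.__version__, torch.cuda.is_available())"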

Data preparation

CULane

Download CULane and extract it to $CULANEROOT. Then create a link to the data directory.

cd $LANEDET_ROOT
mkdir -p data
ln -s $CULANEROOT data/CULane

For CULane, the directory structure should look like this:

$CULANEROOT/driver_xx_xxframe    # data folders x6
$CULANEROOT/laneseg_label_w16    # lane segmentation labels
$CULANEROOT/list                 # data lists
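
As a quick optional check, list the linked directory and confirm it matches the layout above:

ls data/CULane
# expect the driver_xx_xxframe folders (x6), laneseg_label_w16, and list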

Tusimple

Download Tusimple and extract it to $TUSIMPLEROOT. Then create a link to the data directory.

cd $LANEDET_ROOT
mkdir -p data
ln -s $TUSIMPLEROOT data/tusimple

For Tusimple, the directory structure should look like this:

$TUSIMPLEROOT/clips # data folders
$TUSIMPLEROOT/label_data_xxxx.json # label json files x4
$TUSIMPLEROOT/test_tasks_0627.json # test tasks json file
$TUSIMPLEROOT/test_label.json # test label json file
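
Each label file contains one JSON object per line. Assuming the standard Tusimple annotation format (raw_file, lanes, and h_samples fields), you can peek at the first record with a one-liner like the following (substitute an actual date for xxxx):

python -c "import json; r = json.loads(open('data/tusimple/label_data_xxxx.json').readline()); print(r['raw_file'], len(r['lanes']), len(r['h_samples']))"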

For Tusimple, segmentation annotations are not provided, so we need to generate them from the JSON annotations.

python tools/generate_seg_tusimple.py --root $TUSIMPLEROOT
# this will generate seg_label directory
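
If the script finishes successfully, a seg_label directory should appear under $TUSIMPLEROOT (per the note above); you can verify it, for example, with:

ls data/tusimple/seg_label | head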

Getting Started

Training

For training, run

python main.py [configs/path_to_your_config] --gpus [gpu_ids]

For example, run

python main.py configs/resa/resa50_culane.py --gpus 0
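
The gpu_ids argument presumably accepts more than one GPU index, so multi-GPU training would look something like the line below; this is an assumption based on the flag name, so check python main.py --help to confirm the exact syntax.

python main.py configs/resa/resa50_culane.py --gpus 0 1 2 3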

Testing

For testing, run

python main.py [configs/path_to_your_config] --validate --load_from [path_to_your_model] --gpus [gpu_num]

For example, run

python main.py configs/resa/resa50_culane.py --validate --load_from culane_resnet50.pth --gpus 0

The code can also output visualization results during testing; just add --view. The visualizations will be saved in work_dirs/xxx/xxx/visualization.

For example, run

python main.py configs/resa/resa50_culane.py --validate --load_from culane_resnet50.pth --gpus 0 --view

Inference

See tools/detect.py for detailed information.

python tools/detect.py --help

usage: detect.py [-h] [--img IMG] [--show] [--savedir SAVEDIR]
                 [--load_from LOAD_FROM]
                 config

positional arguments:
  config                The path of config file

optional arguments:
  -h, --help            show this help message and exit
  --img IMG             The path of the img (img file or img_folder), for
                        example: data/*.png
  --show                Whether to show the image
  --savedir SAVEDIR     The root of save directory
  --load_from LOAD_FROM
                        The path of model

To run inference on the example images in ./images and save the visualization results to the ./vis folder:

python tools/detect.py configs/resa/resa34_culane.py --img images \
          --load_from resa_r34_culane.pth --savedir ./vis
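
Similarly, to run the detector on a single image and display the result instead of saving it, combine the --img and --show options from the help output above (images/example.jpg is a placeholder path for illustration):

python tools/detect.py configs/resa/resa34_culane.py --img images/example.jpg --show --load_from resa_r34_culane.pth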

Contributing

We appreciate all contributions to improve LaneDet. Any pull requests or issues are welcome.

Licenses

This project is released under the Apache 2.0 license.

Acknowledgement