This is my personal academic web page. My research interests include spiking neural networks, neurorobotics, event-based cameras, brain-inspired visual learning, and active visual learning.
This is my dissertation project from my time at the University of Sheffield. All code is open source; you can read more details in my repository.
The objective of this project was to implement a visual tracking and control system that meets the customer's requirements (following trucks, pedestrians, and drones). Using a gimbal camera and a PixKit chassis, I built a robust visual tracking-by-detection and control system. The project successfully enabled the customer to carry out further research on top of it; more details are released in my repository.
POSP is a project for Picking Out a Single Package on a long conveyor belt. The algorithm must work with a depth camera such as the RealSense D415/D435 (I used a D415 in this project). All code can be accessed from my repository.
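As a rough illustration of the idea (my own toy sketch, not the project's released code), one way to pick out the nearest single package from a depth frame is to threshold against the belt plane, group the remaining pixels into connected regions, and take the closest region:

```python
import numpy as np
from collections import deque

# Toy sketch (an assumption of the approach, not the actual POSP code):
# keep pixels closer to the camera than the belt plane, flood-fill
# connected regions, and return the mask of the nearest region.
def nearest_package_mask(depth, belt_depth, margin=0.02):
    fg = (depth < belt_depth - margin) & (depth > 0)  # packages stick up
    labels = np.zeros(depth.shape, dtype=int)
    regions = []  # (mean_depth, label)
    current = 0
    for i in range(depth.shape[0]):
        for j in range(depth.shape[1]):
            if fg[i, j] and labels[i, j] == 0:
                current += 1
                labels[i, j] = current
                pixels = []
                queue = deque([(i, j)])
                while queue:  # 4-connected flood fill
                    y, x = queue.popleft()
                    pixels.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < depth.shape[0] and 0 <= nx < depth.shape[1]
                                and fg[ny, nx] and labels[ny, nx] == 0):
                            labels[ny, nx] = current
                            queue.append((ny, nx))
                ys, xs = zip(*pixels)
                regions.append((depth[list(ys), list(xs)].mean(), current))
    if not regions:
        return None
    _, best = min(regions)  # smallest mean depth = nearest package
    return labels == best

# Synthetic belt at 1.0 m with two boxes at 0.9 m and 0.8 m:
depth = np.full((60, 120), 1.0)
depth[10:25, 10:40] = 0.9
depth[30:50, 60:100] = 0.8
mask = nearest_package_mask(depth, belt_depth=1.0)
print(mask[40, 80], mask[15, 20])  # the 0.8 m box is picked
```

A real pipeline on D415 frames would also need hole filling and a fitted (rather than assumed flat) belt plane; the sketch only shows the selection logic.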
The Robobus is an autonomous shuttle bus designed for urban areas. I was mainly responsible for the whole perception framework; more details can be accessed from my repository.
The sweeping robot is an autonomous mobile robot for cleaning road surfaces. I was mainly responsible for trash detection and obstacle segmentation for avoidance, as well as the road curb detection module. More details can be found in my repository.
The H-ackerman robot is built by mounting a ZED camera and a Jetson Xavier DevKit (32 GB) on an Ackermann chassis. I deployed the T&P project I did at the University of Sheffield on this robot so that I can do further research with it. More details can be found in my repository.
An internship with Dr Li Sun at the University of Sheffield on autonomous mobile robots (a Jaguar robot). Demo1 Demo2
This is a coursework project (COM6009) at the University of Sheffield. With my group mates, I implemented a natural system model in Matlab to simulate the natural process of bee foraging. All code and additional materials are available in my repository.
This end-to-end robot control project is similar to my miro-cv-system project: both use an end-to-end paradigm to control a mobile robot with a differential base. However, this project is based on event cameras and implemented with a spiking neural network rather than an ANN. More details can be found on my academic page and in my repository.
This project mainly shows how to use Nengo to implement robust, embedded neural adaptive control. Because it is built on Nengo, it can be deployed on CPUs and GPUs as well as Intel's neuromorphic chip, Loihi. You can read my academic page or find the code in my repository.
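The adaptive part of such a controller can be illustrated with a small numpy sketch of the PES-style learning rule that Nengo's neural adaptive controllers are built on. This is an illustrative stand-in, not the Nengo API, and all constants (gains, rates, the disturbance) are made-up values:

```python
import numpy as np

rng = np.random.default_rng(0)

# Fixed random encoders/gains/biases give each "neuron" a tuning curve
# over the 1-D state q; rectified-linear activities stand in for rates.
n_neurons = 200
encoders = rng.choice([-1.0, 1.0], size=n_neurons)
gains = rng.uniform(0.5, 2.0, size=n_neurons)
biases = rng.uniform(-1.0, 1.0, size=n_neurons)
decoders = np.zeros(n_neurons)  # learned output weights

def activities(q):
    return np.maximum(0.0, gains * (encoders * q) + biases)

# Point mass with an unknown constant disturbance: a PD controller alone
# leaves a steady-state error, which the learned adaptive term removes.
q, dq = 0.0, 0.0
target, dt, kp, kd = 1.0, 1e-3, 50.0, 10.0
disturbance = -4.0
learning_rate = 1e-4

for _ in range(40000):  # 40 s of simulated time
    error = target - q
    a = activities(q)
    u = kp * error - kd * dq + decoders @ a  # PD + adaptive term
    # PES-style update: move decoders along the activities, scaled by error
    decoders += learning_rate * error * a
    ddq = u + disturbance           # plant dynamics with disturbance
    dq += ddq * dt
    q += dq * dt

print(round(q, 2))  # settles near the 1.0 target despite the disturbance
```

The learned term effectively behaves like a state-dependent integral action; Nengo implements the same idea with spiking ensembles and its `nengo.PES` learning rule, which is what allows the controller to run on Loihi.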
Doing initial research and development work on spiking neural networks for the perception module of autonomous robots using event cameras in low-light environments, especially for object detection.
Doing research and development work on the perception module of a KUKA KR210 R2700-2 robotic arm project, which included surveying and comparing different 3D structured-light cameras; designing a solution for collecting detailed 3D point cloud data of aluminium material; and researching how to calculate the physical error between the aluminium product's point cloud and the CAD prototype ground truth.
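One simple way to quantify that physical error, shown here only as a hedged sketch on synthetic data (not the project's actual method), is the mean nearest-neighbour distance, i.e. a one-sided chamfer distance, from the scanned cloud to the CAD cloud:

```python
import numpy as np

# Mean nearest-neighbour distance from each scan point to the CAD cloud.
# Brute force for clarity; real dense scans would use a KD-tree instead.
def mean_nn_error(scan, cad):
    # scan: (N, 3), cad: (M, 3) point arrays in the same frame
    d2 = ((scan[:, None, :] - cad[None, :, :]) ** 2).sum(axis=2)
    return np.sqrt(d2.min(axis=1)).mean()

# CAD: a 10x10 grid on the unit square at z = 0 (metres);
# scan: the same surface measured 1 mm too high everywhere.
xs, ys = np.meshgrid(np.linspace(0, 1, 10), np.linspace(0, 1, 10))
cad = np.stack([xs.ravel(), ys.ravel(), np.zeros(100)], axis=1)
scan = cad + np.array([0.0, 0.0, 0.001])
print(round(mean_nn_error(scan, cad) * 1000, 3))  # error in mm
```

In practice the scan would first be registered to the CAD frame (e.g. with ICP) before scoring, and a symmetric chamfer or point-to-surface distance is often preferable; the sketch only shows the scoring step.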
Doing research and development work on the perception module of sweeping robots, which included a trash detection and segmentation module and a road curb detection module; delivering the design and solution for mounting a stereo camera detection module on a sweeping robot; researching data augmentation for object detection; and developing an online website for annotating 2D and 3D objects.
Doing research and development work on the perception module based on the Autoware.ai/Autoware.tierIV and Autoware.universe frameworks; this work was ultimately formed into a standard release version for customers.
Doing research and development work on the perception module of a Robobus project, which included deploying a joint calibration method for cameras and LiDARs; comparing different vehicle-specification-level cameras; designing a solution for mounting various cameras for robots' perception modules; testing hardware-level time synchronisation; researching multi-camera BEV perception paradigms for 3D detection modules; and researching multi-modal sensor fusion algorithms.
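The core geometric step behind such a camera-LiDAR joint calibration can be sketched as follows: transform LiDAR points into the camera frame with the extrinsics [R|t], then project them with the pinhole intrinsics K and check where they land in pixels. All numeric values below are hypothetical placeholders, not the project's actual calibration:

```python
import numpy as np

K = np.array([[800.0,   0.0, 640.0],
              [  0.0, 800.0, 360.0],
              [  0.0,   0.0,   1.0]])   # assumed pinhole intrinsics
R = np.eye(3)                           # assumed extrinsic rotation
t = np.array([0.0, 0.0, 0.1])           # assumed extrinsic translation (m)

def project(points_lidar):
    """Project (N, 3) LiDAR points to (N, 2) pixel coordinates."""
    cam = points_lidar @ R.T + t        # LiDAR frame -> camera frame
    uvw = cam @ K.T                     # apply intrinsics
    return uvw[:, :2] / uvw[:, 2:3]     # divide out depth

# A point on the optical axis lands on the principal point (640, 360).
pts = np.array([[0.0, 0.0, 5.0], [1.0, 0.0, 5.0]])
print(project(pts))
```

A calibration is validated by overlaying such projections on the image and checking that LiDAR returns align with the corresponding pixels; hardware-level time synchronisation matters because a timestamp offset shifts this overlay on a moving vehicle.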
A Chinese invention patent. Publication No.: CN116630374B. Title: 目标对象的视觉跟踪方法、装置、存储介质及设备 (Visual tracking method, apparatus, storage medium, and device for a target object).
A Chinese utility model patent. Publication No.: CN220455529U. Title: 一种便携式点云地图采集设备 (Portable point cloud map acquisition equipment).
Master's thesis: Application of Computer Vision on a Biomimetic robot
2020-PhD Proposal: A Self-supervised, Self-adaptive Model for the Next Generation Vision-based Robot Navigation
2023-PhD Proposal: Learning before Acting: A Self-Supervised, Lifelong, Bio-inspired Active Visual Learning Framework for Robots