simshineaicamera/SIMCAM_SDK
SimCam AI Camera.

The first on-device AI Security Camera for smart home.

camera pic

The SimCam uses AI for facial recognition, pet monitoring, and more via location training. It has a 5-megapixel image sensor with night vision for still images and 1080p HD video. The IP65-rated waterproof indoor/outdoor camera can rotate 360 degrees while tracking objects.
With the open SDK of SimCam, you can customize the settings to meet your needs.

The workflow of SimCam in developer mode is shown in the following figure.

work flow

As shown in the figure, SimCam consists of two embedded boards: one is an HI3516 and the other a Movidius. The HI3516 handles video capture and playback, while the Movidius handles deep learning inference. The two boards communicate over SPI.

There are documents for developers inside the docs folder.

However, here is a guide for really quick starters 🏃 Let's get started 😄

Installation_with_docker

The easiest way to prepare a working environment is using Docker. We have created a Docker image on Docker Hub, so you can simply pull it and start developing. Here you can find how to install and use Docker on Ubuntu. The commands below pull the SIMCAM SDK Docker image and run it:

sudo docker run -ti --name=simcamsdk -v /home/username/:/home/username/ simcam/simcamsdk:v1.0 bash
cd SIMCAM_SDK_v1.0.0
ls
README.md  docs  examples  img  libs  src  tools  train

Everything is ready to use! However, if you want to install it on your own machine, the step-by-step installation process is shown below.

Installation

  1. Get the SDK. We will refer to the directory you cloned the SIMCAM SDK into as $SIMCAM_SDK:
git clone https://github.com/simshineaicamera/SIMCAM_SDK.git
cd SIMCAM_SDK

Here is SDK folder tree:

SIMCAM_SDK
├── docs
├── examples
├── img
├── libs
├── src
├── tools
└── train

Note: if you only want to run the detection demo, you can skip the installation process and jump to Run_detection_demo. However, for the face recognition demo and future development you need to install the SIMCAM SDK tools.

HI3516 toolchain installation

  1. Enter the $SIMCAM_SDK/tools/arm_toolchain folder and execute the commands below:
cd $SIMCAM_SDK/tools/arm_toolchain
sudo chmod +x cross.v300.install
sudo ./cross.v300.install
  2. If you have a 64-bit OS, you need to install the 32-bit compiler compatibility packages:
sudo apt-get install lib32z1
sudo apt-get install lib32stdc++6-4.8-dbg
  3. Add the toolchain path to the PATH environment variable:
vi ~/.bashrc
export PATH="/opt/hisi-linux/x86-arm/arm-hisiv300-linux/target/bin:$PATH"
  4. Reload the .bashrc file:
source ~/.bashrc
  5. Execute the command below to check the toolchain version:
arm-hisiv300-linux-gcc  -v

If you can see “gcc version 4.8.3 20131202 (prerelease) (Hisilicon_v300)” at the end of the version description, congratulations, the toolchain installation finished successfully.

Movidius toolchain installation

  1. Enter the $SIMCAM_SDK/tools/mv_toolchain/model_conversion/ folder and execute the install-ncsdk.sh script:
cd $SIMCAM_SDK/tools/mv_toolchain/model_conversion/
sudo ./install-ncsdk.sh

This script will install the Movidius model conversion toolkit, and also the CPU version of caffe-ssd, on your system. Execute the command below to see the Movidius model conversion toolkit's help:

mvNCCompile -h
mvNCCompile v02.00, Copyright @ Movidius Ltd 2016

usage: mvNCCompile [-h] [-w WEIGHTS] [-in INPUTNODE] [-on OUTPUTNODE]
                   [-o OUTFILE] [-s NSHAVES] [-is INPUTSIZE INPUTSIZE]
                   network

mvNCCompile.py converts Caffe or Tensorflow networks to graph files that can
be used by the Movidius Neural Compute Platform API

positional arguments:
  network               Network file (.prototxt, .meta, .pb, .protobuf)

optional arguments:
  -h, --help            show this help message and exit
  -w WEIGHTS            Weights file (override default same name of .protobuf)
  -in INPUTNODE         Input node name
  -on OUTPUTNODE        Output node name
  -o OUTFILE            Generated graph file (default graph)
  -s NSHAVES            Number of shaves (default 1)
  -is INPUTSIZE INPUTSIZE
                        Input size for networks that do not provide an input
                        shape, width and height expected

If you can see the above result, congratulations, you have finished the installation. It was really easy, right?!

Install opencv

Note: if you have already installed OpenCV >= 3.4.3, you can skip this step.

  1. Enter the $SIMCAM_SDK/tools/ folder and run the install_opencv.sh script:
cd $SIMCAM_SDK/tools/
sudo ./install_opencv.sh

The above command will install OpenCV 3.4.3 on your system. Hurray, we have completely finished the installation process! We can move on to the more interesting parts.

Run_detection_demo

  1. Prepare an SD card and your SIMCAM AI camera. If you still don't have the camera, buy one from here.
  2. Copy the following files onto your SD card; they are located inside the $SIMCAM_SDK/src folder:
├── config.txt
├── Detect_Server_Process
├── emotion
├── gender
├── person_face
├── rootfs
├── rtl_hostapd_2G.conf

and also the $SIMCAM_SDK/examples/Detect_Demo executable file.

  3. Open the config.txt file to get familiar with it. It is in JSON format; here is a brief introduction to the config file and the parameters inside it.
{
"wifi_mode":"AP_MODE",  # In AP_MODE the camera creates a LAN hotspot; in STA_MODE it connects to the internet.
"if_send":0, # switch to 1 if you want to send detection results to your server.
"server_IP":"118.190.201.26", # you can set your server's IP address.
"port":8001, # your server's port
"model":[{  
    "model_have":1,    # model switch parameter, 0 or 1.   
    "input_width":300, #  input width
    "input_height":300,# input height
    "shave_num":6,     # number of shaves when you convert the detection graph
    "input_color":1,   # input color 1 for RGB, 0 for Gray
    "mean0":127.5,     # image processing parameters
    "mean1":127.5,     # image processing parameters
    "mean2":127.5,     # image processing parameters
    "std":127.5,       # image processing parameters
    "label":3,        # this parameter is used by classification models,
    "conf_thresh":0.5  # confidence threshold
},
{
    "model_have":1,
    "input_width":64,
    "input_height":64,
    "shave_num":1,
    "input_color":0,
    "mean0":127.5,
    "mean1":0,
    "mean2":0,
    "std":127.5,
    "label":3,  # face class id in our person_face detection model.
    "conf_thresh":0.5
},
{
    "model_have":1,
    "input_width":64,
    "input_height":64,
    "shave_num":2,
    "input_color":0,
    "mean0":127.5,
    "mean1":127.5,
    "mean2":127.5,
    "std":127.5,
    "label":3,
    "conf_thresh":0.5
}]
}
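Note that the inline `#` comments above are annotations for this README; a file parsed as strict JSON must not contain them. As a sanity check before copying config.txt to the SD card, you could strip such comments and validate the result. This is just a sketch (the naive comment-stripping assumes no `#` appears inside string values):

```python
import json
import re

def load_annotated_config(text):
    """Strip '#' comments and parse the rest as JSON.

    Naive: assumes no '#' occurs inside JSON string values.
    """
    cleaned = re.sub(r'#.*', '', text)
    return json.loads(cleaned)

sample = """
{
"wifi_mode":"AP_MODE",  # AP_MODE creates a LAN hotspot
"if_send":0,
"server_IP":"118.190.201.26",
"port":8001,
"model":[{"model_have":1,"input_width":300,"input_height":300,
          "shave_num":6,"input_color":1,"mean0":127.5,"mean1":127.5,
          "mean2":127.5,"std":127.5,"label":3,"conf_thresh":0.5}]
}
"""

cfg = load_annotated_config(sample)
assert cfg["wifi_mode"] in ("AP_MODE", "STA_MODE")
assert len(cfg["model"]) <= 3                      # the camera runs at most 3 models
assert 0.0 <= cfg["model"][0]["conf_thresh"] <= 1.0
print("config OK:", cfg["model"][0]["input_width"], "x", cfg["model"][0]["input_height"])
```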

The SIMCAM camera can run three deep learning models simultaneously: the first is a detection model, and the other two are classification models.

  4. Insert the SD card into the camera and power it on. With the default config.txt, the SIMCAM camera runs in AP_MODE and creates a LAN hotspot with default ssid=REVO_DPL, password=87654321, and IP = 192.168.0.1.

  5. Connect your PC or laptop to the REVO_DPL wifi. Open your terminal and run the following command:

telnet 192.168.0.1
  6. The default user login is root and the password is blank.

  7. Enter the /mnt/DCIM/ folder and run the Detect_Demo executable file:

cd /mnt/DCIM/
./Detect_Demo
  8. And that's it, you are running the SIMCAM camera with person and face detection, plus the gender and emotion classification models. You can see results in the terminal like below:
shmid=0
ntpd: bad address 'us.ntp.org.cn'
send app Detect_Server_Process size :862620
sleep 3s for 2450 boot up
fd : 4,ret : 0
ssd cfg size:136
person_face size:1378032
gender size:124760
emotion size:124760
No detection, spi loop>>>>>>>>>>>>>>>>>>>>>>>>
No detection, spi loop>>>>>>>>>>>>>>>>>>>>>>>>
DetectionModelResult: class: 1, x1: 0.002197, y1: 0.007080, x2: 0.660156, y2: 0.994141
DetectionModelResult: class: 1, x1: 0.011475, y1: 0.007813, x2: 0.713867, y2: 0.995117
DetectionModelResult: class: 3, x1: 0.011475, y1: 0.007813, x2: 0.713867, y2: 0.995117
FirstClassificationModelResult, class0: 0.354736
FirstClassificationModelResult, class1: 0.645020
SecondClassificationModelResult, class0: 0.051086
SecondClassificationModelResult, class1: 0.508789
......
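The coordinates in these `DetectionModelResult` lines are normalized to [0, 1]. To map them onto pixel positions, a small parser might look like the sketch below; the 1920x1080 frame size is my assumption based on the camera's 1080p stream, so adjust it to your actual resolution:

```python
import re

# Matches the detection lines printed by the demo, as shown above.
LINE_RE = re.compile(
    r"DetectionModelResult: class: (\d+), "
    r"x1: ([\d.]+), y1: ([\d.]+), x2: ([\d.]+), y2: ([\d.]+)")

def parse_detection(line, frame_w=1920, frame_h=1080):
    """Parse one log line into (class_id, pixel box) or None for non-detection lines."""
    m = LINE_RE.match(line)
    if not m:
        return None
    cls = int(m.group(1))
    x1, y1, x2, y2 = (float(g) for g in m.groups()[1:])
    return cls, (round(x1 * frame_w), round(y1 * frame_h),
                 round(x2 * frame_w), round(y2 * frame_h))

line = "DetectionModelResult: class: 1, x1: 0.002197, y1: 0.007080, x2: 0.660156, y2: 0.994141"
cls, box = parse_detection(line)
print(cls, box)   # class 1 (person); the box covers most of the frame
```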

If you want to see real-time results with bounding boxes, you can install the VLC media player on your machine and open a network stream at this address: rtsp://192.168.0.1 . Another option is the Kalay app for mobile phones, available for both Android and iOS. But please make sure your smartphone is connected to the REVO_DPL wifi.
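If you set "if_send" to 1 in config.txt, the camera sends detection results to "server_IP":"port". The wire format is not documented in this README, so the sketch below only accepts one TCP connection and prints whatever raw bytes arrive; the plain-TCP assumption is mine, and you should adapt the parsing once you know the actual payload:

```python
import socket

def receive_once(host="0.0.0.0", port=8001, bufsize=4096):
    """Accept one connection and return the first chunk of bytes received.

    Run this on the machine whose address you entered as server_IP,
    then call receive_once() and trigger a detection on the camera.
    """
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((host, port))
        srv.listen(1)
        conn, addr = srv.accept()
        with conn:
            data = conn.recv(bufsize)
            print("from", addr, ":", data)
            return data
```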

Face_recognition_demo

I hope you have already successfully run our detection demo and become close friends with the SIMCAM camera. The next step is to get familiar with the face recognition demo.

  1. Prepare folders of face images, one folder per person you want the SIMCAM camera to identify, named after that person, with at least one picture per person. Here is an example: sample
  2. Copy your folders into $SIMCAM_SDK/examples/face_recognition/extract_face_features/face_images:
cp -r BruceLee/ $SIMCAM_SDK/examples/face_recognition/extract_face_features/face_images
  3. Open a terminal inside $SIMCAM_SDK/examples/face_recognition/extract_face_features and execute the following command:
./main face_images/

This command will extract the face features of each person and save them into the faces.db SQLite database.
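The schema of faces.db is not described in this README, but since it is an ordinary SQLite file you can inspect it from Python before copying it to the SD card. A minimal sketch (the table and column names are whatever extract_face_features produced, so only trusted local files should be passed in):

```python
import sqlite3

def describe_db(path):
    """Return {table_name: row_count} for an SQLite database file.

    Table names come from sqlite_master; they are interpolated into the
    COUNT query, so only use this on files you trust.
    """
    con = sqlite3.connect(path)
    try:
        tables = [r[0] for r in con.execute(
            "SELECT name FROM sqlite_master WHERE type='table'")]
        return {t: con.execute(f"SELECT COUNT(*) FROM {t}").fetchone()[0]
                for t in tables}
    finally:
        con.close()
```

Calling `describe_db("faces.db")` should list one row per enrolled face image, which is a quick way to confirm the extraction step worked.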

  4. We assume you have already copied the files inside the $SIMCAM_SDK/src folder onto your SD card; now copy the other necessary files onto it as well:
cp $SIMCAM_SDK/examples/face_recognition/extract_face_features/faces.db $sdcardpath # face features database
cp $SIMCAM_SDK/examples/face_recognition/Demo $sdcardpath  # demo executable for face recognition
cp $SIMCAM_SDK/examples/models/lcnn/lcnn $sdcardpath # face feature extractor model,
  5. Open the config.txt file and make the following changes (marked as changing lines):
{
"wifi_mode":"AP_MODE",  
"if_send":0,
"server_IP":"118.190.201.26",
"port":8001,
"model":[{  
    "model_have":1,    
    "input_width":300,
    "input_height":300,
    "shave_num":6,     
    "input_color":1,   
    "mean0":127.5,     
    "mean1":127.5,     
    "mean2":127.5,     
    "std":127.5,       
    "label":3,        
    "conf_thresh":0.5
},
{
    "model_have":1,
    "input_width":128,  # changing line, changed 64->128
    "input_height":128, # changing line  changed 64->128
    "shave_num":2,       # changing line changed 1->2
    "input_color":0,
    "mean0":0,           # changing line changed 127.5->0
    "mean1":0,
    "mean2":0,
    "std":256,  #  changing line changed 127.5->256
    "label":3,  
    "conf_thresh":0.5
},
{
    "model_have":0,     # changing line changed 1->0
    "input_width":64,
    "input_height":64,
    "shave_num":2,
    "input_color":0,
    "mean0":127.5,
    "mean1":127.5,
    "mean2":127.5,
    "std":127.5,
    "label":3,
    "conf_thresh":0.5
}]
}

You can find parameter information for each model in its model folder.

  6. Connect to the camera through telnet and execute Demo. You will get results similar to this:
/mnt/DCIM ./Demo
shmid=0
send app Detect_Server_Process size :862620
sleep 3s for 2450 boot up
fd : 4,ret : 0
ssd cfg size:136
person_face size:1378032
lcnn size:2786192
No detection, spi loop>>>>>>>>>>>>>>>>>>>>>>>>
DetectionModelResult: class: 1, x1: 0.051270, y1: 0.007813, x2: 0.909180, y2: 0.971680
DetectionModelResult: class: 3, x1: 0.051270, y1: 0.007813, x2: 0.909180, y2: 0.971680
Recognized as : BruceLee
......

Train_Caffe_model

Developers can train their own object detection models using the Caffe deep learning framework and the neural network architecture provided by the SimCam team. Training your own custom object detection model is very easy using the SIMCAM SDK if you have already finished the installation process; all you need is video files containing the desired object. Here is a simple guide on how to accomplish it.

Preparing data for training:

  1. Open the SIMCAM SDK folder and copy all your video files into the $SIMCAM_SDK/train/data/Images_xmls/videos folder.
  2. Open a terminal in the $SIMCAM_SDK/train/data/Images_xmls folder and run the video2img.py python script:
cd $SIMCAM_SDK/train/data/Images_xmls/
python3 video2img.py

It will save one frame per second as an image in the JPEGImages folder by default. However, there are options: you can change the input folder, the output folder, and the number of frames to save.

python3 video2img.py -h
usage: video2img.py [-h] [--input INPUT] [--output OUTPUT]
                    [--num NUMFRAMEPERSECOND]
optional arguments:
  -h, --help            show this help message and exit
  --input INPUT, -i INPUT
                        video input path
  --output OUTPUT, -o OUTPUT
                        output path
  --num NUMFRAMEPERSECOND, -n NUMFRAMEPERSECOND
                        num frame to get per second
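The sampling idea behind video2img.py (n frames per second, one by default) can be sketched without any video library: given the video's fps and the desired rate, pick evenly spaced frame indices. This is my reimplementation of the concept, not the script itself:

```python
def frames_to_save(total_frames, fps, n_per_second=1):
    """Return the frame indices to save: n_per_second evenly spaced picks per second."""
    step = fps / n_per_second          # distance in frames between saved frames
    idx, out = 0.0, []
    while round(idx) < total_frames:
        out.append(round(idx))
        idx += step
    return out

# A 3-second clip at 30 fps, one frame per second -> frames 0, 30, 60.
print(frames_to_save(90, 30))   # → [0, 30, 60]
```

In the real script, each selected index would be decoded and written as a JPEG into the JPEGImages folder.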
  3. Image annotation. You should annotate the extracted images manually. We have provided an open source annotation tool named labelImg, which outputs the object coordinates in xml format for further processing. The simple annotation steps are shown below:
  • Execute the labelImg file and open the image dataset folder (in our case the JPEGImages folder) by clicking the OpenDir icon on the left pane.
  • An image will appear. Click the “Change Save Dir” icon and choose the Annotations folder as the save folder. Draw rectangular boxes around the objects by clicking the Create RectBox icon and give each a label. These boxes are called bounding boxes.
  • Repeat the second step for each image that appears. The image below shows an example of an annotated image. sample
  4. If you have finished all the above steps, you get a bunch of xml files (annotations) inside the Annotations folder and images inside the JPEGImages folder.
  5. Open a terminal inside the $SIMCAM_SDK/train/data/Images_xmls folder and run the create_txt.py python script:
python create_txt.py
  6. This python script will create train.txt, test.txt, trainval.txt and val.txt files in the $SIMCAM_SDK/train/data/Images_xmls/ImageSets/Main folder.

  7. Go into the $SIMCAM_SDK/train/data/lmdb_files folder and create your own labelmap.prototxt file; an example already exists in the folder, and you can change it according to your dataset.

  8. In the terminal run the create_list.sh script:

./create_list.sh

It will generate the trainval.txt, test.txt, and test_name_size.txt files in the folder.

  9. The last step is generating the lmdb files; lmdb is Caffe's data format for training. In the terminal run the create_data.sh script:

./create_data.sh

It will create trainval_lmdb and test_lmdb files in the lmdb folder.
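The split files produced by create_txt.py are just lists of image basenames, one per line. A minimal version of that split might look like the sketch below; the 80/20 ratio and the test-equals-val choice are my assumptions, so check the actual script for the ratios it uses:

```python
import random

def write_splits(names, out_dir=".", train_ratio=0.8, seed=0):
    """Shuffle image basenames and write train/val/trainval/test list files."""
    rng = random.Random(seed)          # fixed seed for a reproducible split
    names = sorted(names)
    rng.shuffle(names)
    cut = int(len(names) * train_ratio)
    splits = {"train": names[:cut], "val": names[cut:]}
    splits["trainval"] = splits["train"] + splits["val"]
    splits["test"] = splits["val"]     # assumption: test set reuses val here
    for split, items in splits.items():
        with open(f"{out_dir}/{split}.txt", "w") as f:
            f.write("\n".join(items))
    return {k: len(v) for k, v in splits.items()}
```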

Train model:

So now you have nearly everything ready to train the network with the data you prepared yourself. The last thing is the network itself! The SIMCAM team provides a robust network and all the necessary scripts for you to train and deploy your own model on SIMCAM products.

  1. Run gen_model.sh script to generate Network:
./gen_model.sh  <num>

“num” is the number of classes in your dataset, including the background class. The script will create a prototxts folder with .prototxt files inside for training, testing and deploying the model.

  2. If you do not have at least a GeForce GTX 1060 or a higher-end GPU in your Ubuntu machine, you can skip this step, because installing the SIMCAM SDK and toolchain automatically installs the CPU version of caffe-ssd on your machine, in the /opt/movidius/ssd-caffe path. Otherwise, let's install the GPU version of caffe-ssd to speed up your training process.
    To make the process simpler, the SIMCAM team has provided a Docker image on Docker Hub, plus a Dockerfile, for installing the GPU version of caffe-ssd. All you need is docker and nvidia-docker on your Ubuntu system. Here is some information about Docker and the installation process for docker and nvidia-docker. These are the simple steps to pull and run the simcam/caffe-ssd:gpu Docker image on your machine:
sudo docker run --runtime=nvidia -ti --name=simcam  -v /home/your_username:/home/your_username simcam/caffe-ssd:gpu bash

and inside the container go to the $SIMCAM_SDK/train/ folder:

cd $SIMCAM_SDK/train/

sample

  3. To start training, run the train.sh script:
./train.sh

sample

That is all, your object detection model has started training. You get a snapshot every 1000 iterations, and the total training lasts 120000 iterations. Afterwards, you will get simcam_iter_xxxxx.caffemodel inside the snapshot folder and a deploy.prototxt file inside the prototxts folder.

Convert to the graph

During the installation process we saw the description of the model conversion tool, so let's convert our trained model using it.

mvNCCompile deploy.prototxt -w simcam_iter_xxxxx.caffemodel -o graph -s 6

Deep learning model training guide.

  1. Recommended deep learning frameworks:
    a) Caffe, highly recommended. The SimCam team has provided a robust model architecture based on Caffe and a simple guide on how to train a model with the Caffe framework.
    b) Keras. If you want to train a model with Keras, you need to use Microsoft's conversion tool MMdnn to convert your pre-trained model into a Caffe model. Here is the link: https://github.com/Microsoft/MMDnn
    c) Tensorflow.
  2. Detection model:
    One-shot detection frameworks such as MobileNet-SSD and YOLO are highly recommended.
  3. Some limitations:
    a) Model size must be less than 20M-30M.
    b) Each layer's ‘name’ must be the same as its ‘top’.
    c) The depth-wise convolution layer supports only a 3x3 filter size, and the slice layer doesn't support direct connection.
  4. Model optimization tips:
    a) After the model is trained, some layers can be merged to speed up the feed-forward pass. For example, merge the bn-scale parameters into the conv layer.
    b) Do not use fc layers with many output channels; they bring a large number of parameters and, as a result, slow data movement.
    c) Use 1x1 and 3x3 convolutions, including depth-wise convolutions; they are highly optimized on the Movidius chip.
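Tip (a), merging batch-norm/scale into the preceding convolution, amounts to rescaling the conv weights and bias. For one output channel with BN statistics (mean μ, variance σ²) and scale parameters (γ, β): w' = w·γ/√(σ²+ε) and b' = (b−μ)·γ/√(σ²+ε)+β. A tiny numeric sketch of the idea:

```python
import math

def fold_bn(w, b, mean, var, gamma, beta, eps=1e-5):
    """Fold a BN + scale pair into one conv output channel's weights and bias."""
    s = gamma / math.sqrt(var + eps)
    return [wi * s for wi in w], (b - mean) * s + beta

# One 1x3 filter with near-identity BN statistics:
w2, b2 = fold_bn(w=[1.0, 2.0, 3.0], b=0.5, mean=0.0, var=1.0, gamma=1.0, beta=0.0)

# Check: conv followed by BN equals the folded conv on the same input.
x = [1.0, 1.0, 1.0]
conv_out = sum(wi * xi for wi, xi in zip([1.0, 2.0, 3.0], x)) + 0.5
bn_out = (conv_out - 0.0) * (1.0 / math.sqrt(1.0 + 1e-5)) + 0.0
folded_out = sum(wi * xi for wi, xi in zip(w2, x)) + b2
print(abs(bn_out - folded_out) < 1e-9)   # True
```

The same per-channel rescaling applies to each output channel of a real conv layer; after folding, the BN and scale layers are simply removed from deploy.prototxt.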

The SIMCAM team has also provided several robust detection models, such as the [baby climb](examples/models/babyclimb) detection model, the [gesture](examples/models/gesture) detection model, the [person car face](examples/models/person_car_face) detection model, and the [pet magic](examples/models/pet_magic) detection models. Here are some interesting detection results for some of the models:
Pet magic detection model:

pet magic


Baby climb detection model:

baby climb

Video_Surveillance_with_SimCam

iSpy is the world's most popular open source video surveillance application. Here is a guide to setting up a video surveillance monitoring system with iSpy and SimCam.

  1. Download and install the Open Source Camera Security Software “iSpy”. You can download it from here.

  2. If iSpy has been installed successfully, open iSpy and click the “Add” menu to add the SimCam camera for video surveillance.

sample 3. Open the Video Source window by clicking the "IP Camera" icon.

sample 4. Choose the FFMPEG (H264) menu and enter the SimCam camera's IP address, for example rtsp://192.168.168.171 (this is the SimCam IP address).

  5. Click the Test button to check whether the camera is connected.

sample

If the camera is connected, you will see a message box with the confirmation "Connected".

sample

  6. When you click the "OK" buttons, you will be directed to the "Edit Camera" window. For simplicity, let's leave everything at the default configuration; you just need to click the "Finish" button.

sample

Now you can monitor your connected camera.

sample

Using the SIMCAM camera and its SDK you can develop your own amazing AI applications. The SIMCAM team is glad to see you on this repo and wishes you good luck on your AI journey!!!

Support

If you need any help, please open an issue on GitHub Issues or send us an email at [email protected]. You are welcome to contact us with your suggestions!