Vectorized Convolutional Neural Network from Scratch

This repository contains the final assignment of CSE 472: Machine Learning Sessional, offered by the CSE Department of BUET, along with all the submissions for that course.

The task is to build a vectorized version of a Convolutional Neural Network using only numpy, without any deep learning frameworks. Training and testing of the developed model are done on the NumtaDB: Bengali Handwritten Digits dataset. The training-a, training-b and training-c folders are used for training, and training-d is used for testing. A detailed report on the various experiments and final results can be found in 1705044_report.pdf.

Contents

Model Blocks

The repository contains fully vectorized implementations of Convolution, MaxPool and Fully Connected layers. Please see the model section of the config.yaml file to understand how to build a model with these blocks and their arguments.
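As a rough illustration of what "vectorized" means here (a minimal sketch under assumed array shapes, not the repository's exact code), a convolution forward pass can gather all receptive fields at once and replace four nested Python loops with a single tensordot:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def conv2d_forward(x, w, b, stride=1, pad=0):
    """Vectorized conv forward. x: (N, C, H, W), w: (F, C, KH, KW), b: (F,)."""
    KH, KW = w.shape[2:]
    xp = np.pad(x, ((0, 0), (0, 0), (pad, pad), (pad, pad)))
    # All (KH, KW) receptive fields at once: (N, C, OH, OW, KH, KW)
    win = sliding_window_view(xp, (KH, KW), axis=(2, 3))[:, :, ::stride, ::stride]
    # One big contraction over C, KH, KW replaces the per-pixel loops
    out = np.tensordot(win, w, axes=([1, 4, 5], [1, 2, 3]))  # (N, OH, OW, F)
    return out.transpose(0, 3, 1, 2) + b[None, :, None, None]

x = np.random.randn(2, 3, 28, 28)
w = np.random.randn(8, 3, 3, 3)
y = conv2d_forward(x, w, np.zeros(8), stride=1, pad=1)  # shape (2, 8, 28, 28)
```

Pushing the inner loops into one large numpy contraction like this is what makes a pure-numpy CNN fast enough to train.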

Augmentations

This repository contains various augmentation capabilities, including Opening, Dilation, Mixup, BBox-crop and Contour-Cutout. Please see the augment section of config.yaml for the various parameters. Details can be found in the report.
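For flavor, here is a minimal plain-cutout sketch in numpy (purely illustrative, with invented names; the repository's Contour-Cutout variant, described in the report, is more involved):

```python
import numpy as np

def random_cutout(img, size=8, rng=None):
    """Zero out one random square patch of a (H, W) or (H, W, C) image."""
    rng = rng or np.random.default_rng()
    h, w = img.shape[:2]
    y = int(rng.integers(0, max(h - size, 1)))
    x = int(rng.integers(0, max(w - size, 1)))
    out = img.copy()
    out[y:y + size, x:x + size] = 0
    return out
```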

LR Scheduler

A simple implementation of ReduceLROnPlateau is given. Please see the lr_scheduler section of config.yaml for the parameters.
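The idea behind ReduceLROnPlateau is to shrink the learning rate whenever the monitored metric stops improving for a number of epochs. A minimal sketch of that logic (parameter names here are assumptions, not necessarily those used in config.yaml):

```python
class ReduceLROnPlateau:
    """Halve the LR after `patience` epochs without validation improvement."""

    def __init__(self, lr, factor=0.5, patience=3, min_lr=1e-6):
        self.lr, self.factor = lr, factor
        self.patience, self.min_lr = patience, min_lr
        self.best, self.bad_epochs = float("inf"), 0

    def step(self, val_loss):
        if val_loss < self.best:
            self.best, self.bad_epochs = val_loss, 0
        else:
            self.bad_epochs += 1
            if self.bad_epochs >= self.patience:
                self.lr = max(self.lr * self.factor, self.min_lr)
                self.bad_epochs = 0
        return self.lr
```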

WandB logging

Aside from the normal text-based logs, there is also support for Weights & Biases logging. You need to install the wandb python package first and create a wandb project. In the config.yaml file, set use_wandb to true and change project and entity under wandb according to your project settings. You can then just run your code and see live updates on your dashboard. So sit back and relax! All the logs of my training can be found here.
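Under the hood this follows the standard wandb pattern, roughly like the sketch below (the project and entity names are placeholders that should mirror the wandb section of config.yaml):

```python
import wandb

run = wandb.init(project="my-project", entity="my-entity")  # placeholders
for epoch in range(3):
    # dummy metric; the training script would log real loss/F1 values here
    wandb.log({"epoch": epoch, "train_loss": 1.0 / (epoch + 1)})
run.finish()
```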

Testing Your Code using PYTest

Different tests are developed using pytest for the different model blocks. The forward and backward passes of each individual block are compared against those of the equivalent blocks in pytorch. To run the tests:

```
cd test
pytest
```
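A test in this style might look like the following sketch (names are illustrative, not the repository's actual tests; it reuses the conv2d_forward sketch from above and checks it against torch.nn.functional.conv2d):

```python
import numpy as np
import torch
import torch.nn.functional as F

def test_conv_forward_matches_pytorch():
    x = np.random.randn(2, 3, 8, 8).astype(np.float32)
    w = np.random.randn(4, 3, 3, 3).astype(np.float32)
    b = np.random.randn(4).astype(np.float32)
    ours = conv2d_forward(x, w, b, stride=1, pad=1)  # numpy block under test
    ref = F.conv2d(torch.from_numpy(x), torch.from_numpy(w),
                   torch.from_numpy(b), stride=1, padding=1).numpy()
    assert np.allclose(ours, ref, atol=1e-4)
```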

Training

To run your model on the numta dataset, please download the dataset from here into the resources directory. Alternatively, you can just set data_dir to the dataset's location. Please go through the config.yaml file and change the values accordingly. To run the training script, first install the packages from requirements.txt and then run python train.py.
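In short:

```
pip install -r requirements.txt
python train.py
```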

Inference

Infer on any directory of images with your developed model by changing checkpoint_path in the config.yaml file and executing python test.py $test_directory. A pretrained model is available here. If you have a ground-truth csv, enter its path in gt_csv to get accuracy, F1 scores and a confusion matrix of the predictions. This is the confusion matrix for the training-d folder of the dataset:

[confusion matrix image]

Results

The final macro F1 scores of the model are as follows:

| Validation | Testing (training-d) | Testing (In evaluation) |
|------------|----------------------|-------------------------|
| 0.9769     | 0.98133              | 0.9783                  |

Resources

Additionally, Pseudo Labels1, Pseudo Labels2, the Pretrained Checkpoint and the Report are given under the resources folder. The notebook used to train the pretrained checkpoint can be found here.