# LS-CNN
PyTorch implementation of the LS-CNN architecture from the paper

- [LS-CNN: Characterizing Local Patches at Multiple Scales for Face Recognition](https://ieeexplore.ieee.org/abstract/document/8865656) by Qiangchang Wang and Guodong Guo
## About LS-CNN
Since similar discriminative face regions may occur at different scales, the paper proposes a new backbone network, HSNet, which extracts multi-scale features from two harmonious perspectives: different kernels within a single layer, and concatenation of features from different layers.

- To identify local patches from which facial features can be extracted, the paper introduces a new spatial attention.
- Since channels in high-level CNN layers represent high-level representations, a channel attention is also used to automatically emphasize important channels and suppress less informative ones.
- Together, the spatial and channel attention are called **Dual Face Attention (DFA)**.
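The two DFA branches can be sketched roughly as follows. This is a minimal illustrative sketch, not the package's exact code: the class names `LALayer`, `SELayer`, `DFA` and the reduction ratio of 16 mirror the printed module structure shown further below.

```python
import torch
import torch.nn as nn


class LALayer(nn.Module):
    """Spatial attention: a per-pixel gate in (0, 1) applied to the input."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        hidden = max(channels // reduction, 1)
        self.spatial_atten = nn.Sequential(
            nn.Conv2d(channels, hidden, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden, 1, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        # Broadcast the (N, 1, H, W) attention map over all channels.
        return x * self.spatial_atten(x)


class SELayer(nn.Module):
    """Channel attention: a per-channel gate (squeeze-and-excitation style)."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        hidden = max(channels // reduction, 1)
        self.avg_pool = nn.AdaptiveAvgPool2d(1)
        self.fc1 = nn.Linear(channels, hidden, bias=False)
        self.relu = nn.ReLU(inplace=True)
        self.fc2 = nn.Linear(hidden, channels, bias=False)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.avg_pool(x).view(b, c)                  # squeeze: global average
        w = self.sigmoid(self.fc2(self.relu(self.fc1(w))))  # excite: channel gates
        return x * w.view(b, c, 1, 1)


class DFA(nn.Module):
    """Dual Face Attention: spatial attention followed by channel attention."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.LANet = LALayer(channels, reduction)
        self.SENet = SELayer(channels, reduction)

    def forward(self, x):
        return self.SENet(self.LANet(x))
```

Both gates lie in (0, 1), so DFA rescales features without changing their shape.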
## About this implementation

The goal of this package is to provide a clean, object-oriented implementation of the architecture. The individual submodules are separated into self-contained blocks that come with documentation and type annotations, and are therefore easy to import and reuse.
## Requirements and installation

This project requires Python 3.6+ and PyTorch 1.7.0+.

Within the correct environment, install the package from the repository:

```bash
pip install git+https://github.com/Ksuryateja/LS-CNN
```
## Usage

Either load the predefined network from `model.py`:

```Python
from model import LSCNN

net = LSCNN()
```

Or use the individual modules to build a custom architecture:
```Python
from torch.nn import Sequential
from LSCNN import TransitionBlock, DenseBlock


class Module(Sequential):
    def __init__(self, in_channels, out_channels):
        super(Module, self).__init__()

        self.dense = DenseBlock(in_channels, growth_rate=4, num_layers=2,
                                concat_input=True, dense_layer_params={'dropout': 0.2})
        self.transition = TransitionBlock(self.dense.out_channels, out_channels)


net = Module(10, 4)
```
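When chaining blocks like this, the `out_channels` bookkeeping matters. As a sketch of that bookkeeping (assuming the usual DenseNet-style convention, which matches the `1*30+60=90` annotation in the print-out below; the helper name is hypothetical):

```python
def dense_out_channels(in_channels: int, growth_rate: int, num_layers: int,
                       concat_input: bool = True) -> int:
    """Output width of a dense block: each layer adds `growth_rate` channels,
    and with `concat_input=True` the block input is concatenated as well."""
    new_channels = num_layers * growth_rate
    return new_channels + in_channels if concat_input else new_channels


# The DenseBlock above: in_channels=10, growth_rate=4, num_layers=2
print(dense_out_channels(10, 4, 2))  # 2*4 + 10 = 18
```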
A printed module of this kind looks like:

```
Module(
  (dense): DenseBlock(60, 1*30+60=90)(
    (DenseBlock_0): Sequential(
      (DFA_0): DFA(60, 16)(
        (LANet): LALayer(
          (spatial_atten): Sequential(
            (0): Conv2d(60, 3, kernel_size=(1, 1), stride=(1, 1))
            (1): ReLU(inplace=True)
            (2): Conv2d(3, 1, kernel_size=(1, 1), stride=(1, 1))
            (3): Sigmoid()
          )
        )
        (SENet): SELayer(
          (avg_pool): AdaptiveAvgPool2d(output_size=1)
          (fc1): Linear(in_features=60, out_features=3, bias=False)
          (relu): ReLU(inplace=True)
          (fc2): Linear(in_features=3, out_features=60, bias=False)
          (sigmoid): Sigmoid()
        )
      )
      (DenseLayer_0): DenseLayer(60, 30)(
        (branch1_1x1): BasicConv2d(
          (conv): Conv2d(60, 120, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (bn): BatchNorm2d(120, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
        )
        (branch2_1x1): BasicConv2d(
          (conv): Conv2d(60, 120, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (bn): BatchNorm2d(120, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
        )
        (branch3_1x1): BasicConv2d(
          (conv): Conv2d(60, 30, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (bn): BatchNorm2d(30, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
        )
        (branch1_3x3): BasicConv2d(
          (conv): Conv2d(120, 30, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
          (bn): BatchNorm2d(30, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
        )
        (branch1_3x3_2): BasicConv2d(
          (conv): Conv2d(30, 30, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
          (bn): BatchNorm2d(30, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
        )
        (branch2_3x3): BasicConv2d(
          (conv): Conv2d(120, 30, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
          (bn): BatchNorm2d(30, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
        )
        (output): BasicConv2d(
          (conv): Conv2d(90, 30, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (bn): BatchNorm2d(30, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
        )
      )
    )
  )
  (transition): TransitionBlock()(
    (transition_block): Sequential(
      (DFA): DFA(90, 16)(
        (LANet): LALayer(
          (spatial_atten): Sequential(
            (0): Conv2d(90, 5, kernel_size=(1, 1), stride=(1, 1))
            (1): ReLU(inplace=True)
            (2): Conv2d(5, 1, kernel_size=(1, 1), stride=(1, 1))
            (3): Sigmoid()
          )
        )
        (SENet): SELayer(
          (avg_pool): AdaptiveAvgPool2d(output_size=1)
          (fc1): Linear(in_features=90, out_features=5, bias=False)
          (relu): ReLU(inplace=True)
          (fc2): Linear(in_features=5, out_features=90, bias=False)
          (sigmoid): Sigmoid()
        )
      )
      (Transition layer): TransitionLayer(90, 45)(
        (branch1_1x1): BasicConv2d(
          (conv): Conv2d(90, 45, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (bn): BatchNorm2d(45, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
        )
        (branch2_1x1): BasicConv2d(
          (conv): Conv2d(90, 45, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (bn): BatchNorm2d(45, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
        )
        (branch3_1x1): BasicConv2d(
          (conv): Conv2d(90, 45, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (bn): BatchNorm2d(45, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
        )
        (branch1_3x3): BasicConv2d(
          (conv): Conv2d(45, 45, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
          (bn): BatchNorm2d(45, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
        )
        (branch1_3x3_2): BasicConv2d(
          (conv): Conv2d(45, 45, kernel_size=(3, 3), stride=(2, 2), bias=False)
          (bn): BatchNorm2d(45, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
        )
        (branch2_3x3): BasicConv2d(
          (conv): Conv2d(45, 45, kernel_size=(3, 3), stride=(2, 2), bias=False)
          (bn): BatchNorm2d(45, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
        )
        (branch3_3x3): MaxPool2d(kernel_size=3, stride=2, padding=0, dilation=1, ceil_mode=False)
        (output): BasicConv2d(
          (conv): Conv2d(135, 45, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (bn): BatchNorm2d(45, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
        )
      )
    )
  )
)
```
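The multi-branch `DenseLayer` printed above can be sketched as follows. This is illustrative only, under two assumptions labeled in the comments: that `BasicConv2d` is convolution + batch norm + ReLU, and that the 1x1 bottleneck doubles the channel count, as the 60 to 120 convolutions in the print-out suggest.

```python
import torch
import torch.nn as nn


class BasicConv2d(nn.Module):
    # Assumption: conv + batch norm followed by ReLU, mirroring the
    # BasicConv2d entries in the print-out (which only shows conv and bn).
    def __init__(self, in_ch: int, out_ch: int, **kwargs):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, bias=False, **kwargs)
        self.bn = nn.BatchNorm2d(out_ch, eps=0.001)

    def forward(self, x):
        return torch.relu(self.bn(self.conv(x)))


class DenseLayer(nn.Module):
    """Three branches with effective 1x1, 3x3 and 5x5 receptive fields,
    concatenated and fused by a final 1x1 convolution."""
    def __init__(self, in_channels: int, growth_rate: int):
        super().__init__()
        mid = 2 * in_channels  # assumption: matches the 60 -> 120 expansion above
        self.branch1 = nn.Sequential(  # two stacked 3x3 convs: effective 5x5
            BasicConv2d(in_channels, mid, kernel_size=1),
            BasicConv2d(mid, growth_rate, kernel_size=3, padding=1),
            BasicConv2d(growth_rate, growth_rate, kernel_size=3, padding=1),
        )
        self.branch2 = nn.Sequential(  # one 3x3 conv: effective 3x3
            BasicConv2d(in_channels, mid, kernel_size=1),
            BasicConv2d(mid, growth_rate, kernel_size=3, padding=1),
        )
        self.branch3 = BasicConv2d(in_channels, growth_rate, kernel_size=1)  # 1x1
        self.output = BasicConv2d(3 * growth_rate, growth_rate, kernel_size=1)

    def forward(self, x):
        # Concatenate the three scales (3 * growth_rate channels), then fuse.
        y = torch.cat([self.branch1(x), self.branch2(x), self.branch3(x)], dim=1)
        return self.output(y)
```

With `in_channels=60` and `growth_rate=30` this reproduces the channel widths shown in the print-out: 60 in, 3 x 30 = 90 concatenated, 30 out.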
