example: Add an example of Pointnet inference implementation #2845
base: main
Conversation
Add an implementation of the PointNet model as an example of a more complex neural network. The example works with ModelNet10 input.
def normalize(points):
    # Center the point cloud at the origin, then scale it to fit inside a unit
    # sphere (divide by the largest per-point Euclidean norm).
    norm_pointcloud = points - np.mean(points, axis=0)
    norm_pointcloud /= np.max(np.linalg.norm(norm_pointcloud, axis=1))
    return norm_pointcloud
I am unable to understand which normalization you are trying to use here. You seem to first subtract the mean (calculated along axis 0) from the input values and then divide them by the maximum per-point Euclidean norm (see np.linalg.norm) calculated along axis 1.
Do you wish to use Z-score normalization (i.e., (x - mean) / std)?
This normalization seems to be common when working with point clouds.
The idea is to center all the points and then scale them to a unit sphere. There are more advanced techniques, but I think this simple one is good enough.
examples/network/pointnet.cpp
Outdated
this->out_ptr_
    = sycl::malloc_device<T>(in_n * filt_f * oh * ow, sycl_queue);
You needn't extract the queue simply to allocate the pointer; dnnl::memory will allocate the memory for you, similar to how you have approached it for the other memory parameters, like conv_src_mem and conv_weights_mem. Any reason you did not go ahead with this approach for the dst_mem?
The same applies to the other layers.
See https://oneapi-src.github.io/oneDNN/struct_dnnl_memory-2.html#details-structdnnl-1-1memory.
This will also free you from explicit memory management, as the library will own the buffer.
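For illustration only, here is a minimal sketch (not the PR's actual code) of letting the library own the destination buffer; the shape values and variable names are placeholders:

```cpp
#include <oneapi/dnnl/dnnl.hpp>

int main() {
    dnnl::engine eng(dnnl::engine::kind::cpu, 0);

    // Placeholder sizes; in the PR these would come from the layer
    // parameters (in_n, filt_f, oh, ow).
    const dnnl::memory::dims dst_dims = {1, 64, 32, 32};

    auto dst_md = dnnl::memory::desc(dst_dims,
            dnnl::memory::data_type::f32, dnnl::memory::format_tag::nchw);

    // Constructing dnnl::memory from a descriptor and an engine lets the
    // library allocate and own the buffer, so no explicit
    // sycl::malloc_device / sycl::free calls are needed.
    auto dst_mem = dnnl::memory(dst_md, eng);
    return 0;
}
```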
Thank you for the comment, I changed the approach following it.
When I was porting it from portDNN, I switched to raw pointers because that is how it is implemented there and I am not super familiar with oneDNN. Now everything uses dnnl::memory as you pointed out, and it is much better. The changes are in 9d2cc7b.
examples/network/pointnet.cpp
Outdated
void dump_output() {
  auto output = get_output_as_host_vec();
  std::cout << "Output:\n";
  for (auto e : output) {
    std::cout << e << ", ";
  }
  std::cout << "\n";
}
I suppose this is an artefact from debugging; maybe it's better to remove it?
inline void add_conv_bias_layer(Network<T> &net, dnnl::engine &handle,
    dnnl::stream &stream, std::string const &filter_file,
    std::string const &bias_file, const int in_n, const int in_c,
    const int in_h, const int in_w, const int filt_f, const int filt_c,
    const int filt_h, const int filt_w) {
  net.add_layer(std::make_unique<ConvBiasLayer<T>>(handle, stream,
      filter_file, bias_file, in_n, in_c, in_h, in_w, filt_f, filt_c,
      filt_h, filt_w));
}
These seem to be thin wrappers around directly calling net.add_layer(std::make_unique<...>(...));
do they provide any utility which I may be missing?
The same applies to the other add_* wrappers as well.
examples/network/pointnet.cpp
Outdated
      file_directory + bn_var_file, batch, out_c, 1, 1);
}

void pointnet(dnnl::engine::kind engine_kind) {}
Remove?
examples/network/pointnet.cpp
Outdated
add_softmax_layer(feature_transform_block, eng, stream, 32, 10, 1, 1);

add_log_layer(feature_transform_block, eng, stream, 32, 10, 1, 1);
I do not suppose these need to be two separate layers; you can also use logsoftmax, which does exactly this.
See https://oneapi-src.github.io/oneDNN/dev_guide_softmax.html.
You could also use the eltwise post-op to apply the log, which softmax supports.
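As a rough, hedged sketch only (assuming the oneDNN v3.x API; the shape and variable names are placeholders, not the PR's code), a fused log-softmax could look like this:

```cpp
#include <oneapi/dnnl/dnnl.hpp>

int main() {
    dnnl::engine eng(dnnl::engine::kind::cpu, 0);
    dnnl::stream strm(eng);

    // Placeholder shape standing in for the {32, 10, 1, 1} tensor above.
    auto md = dnnl::memory::desc({32, 10, 1, 1},
            dnnl::memory::data_type::f32, dnnl::memory::format_tag::nchw);
    auto src_mem = dnnl::memory(md, eng);
    auto dst_mem = dnnl::memory(md, eng);

    // algorithm::softmax_log computes log(softmax(x)) in a single primitive,
    // so the separate softmax and log layers can be merged.
    auto pd = dnnl::softmax_forward::primitive_desc(eng,
            dnnl::prop_kind::forward_inference, dnnl::algorithm::softmax_log,
            md, md, /*axis=*/1);

    dnnl::softmax_forward(pd).execute(
            strm, {{DNNL_ARG_SRC, src_mem}, {DNNL_ARG_DST, dst_mem}});
    strm.wait();
    return 0;
}
```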
Addressed in 3faeea6
std::cout << "classed as " << mode << " (i.e., " << object_map[mode] << ")" | ||
<< std::endl; | ||
|
||
#if CHECK_PERF |
Nit: ideally this should be a command-line argument so that there's no need to recompile when trying to measure performance.
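Not part of the PR, just a sketch of what replacing the compile-time macro with a hypothetical `--perf` command-line flag might look like (the flag name and structure are assumptions):

```cpp
#include <chrono>
#include <cstring>
#include <iostream>

int main(int argc, char **argv) {
    // Hypothetical "--perf" flag replacing the CHECK_PERF compile-time macro.
    bool check_perf = false;
    for (int i = 1; i < argc; ++i) {
        if (std::strcmp(argv[i], "--perf") == 0) check_perf = true;
    }

    // ... run inference as usual ...

    if (check_perf) {
        auto start = std::chrono::steady_clock::now();
        // ... timed region, e.g. a second run of the layer executions ...
        auto end = std::chrono::steady_clock::now();
        std::cout << std::chrono::duration_cast<std::chrono::nanoseconds>(
                             end - start).count()
                  << " ns\n";
    }
    return 0;
}
```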
Honestly, I am not sure we want something like this for a oneDNN example. It was in the portDNN code and I decided to keep it for now and get some feedback. When I migrate everything to oneAPI Samples, I'll check if there is an easy and user-friendly way to enable/disable it; otherwise I'll remove this part.
The output of a layer was stored in a raw pointer; now the output is stored in a dnnl::memory object that is passed to the following layer. This removes the need to synchronize after each layer execution.
properly enable bias in FCLayer
examples/network/README.md
Outdated
The oneDNN samples are built in the default CMake configuration. The sample
is built by the target `network-pointnet-cpp`. The samples must first
be passed the directory where the binary weights files are stored and the second
argument should be the preprocessed pointcloud that should be classified. The expected
oneDNN examples are used in various CI environments and across different architectures. A dependency on PyTorch would be a bit of a hassle. I would suggest putting this one into oneAPI Samples instead of the main repo.
Thank you for suggesting a better place for our sample. Once all comments are resolved, I'll open another PR to the appropriate repo.
A few edits are suggested; please incorporate them as you see fit.
examples/network/README.md
Outdated
@@ -0,0 +1,35 @@
# PointNet Convolutional Neural Network Sample for 3D Pointcloud Classification

[PointNet][pointnet-paper] is a convolutional neural network architecture for applications concerning 3D recognition such as object classification and part segmentation. These sample codes implement a variant of PointNet for 3D object classification, for inference only with ModelNet10, showing a larger example of using oneDNN. Some rough instructions for how it might be used are provided.
[PointNet][pointnet-paper] is a convolutional neural network architecture for applications concerning 3D recognition such as object classification and part segmentation. These sample codes implement a variant of PointNet for 3D object classification, for inference only with ModelNet10, showing a larger example of using oneDNN. Some rough instructions for how it might be used are provided.
[PointNet][pointnet-paper] is a convolutional neural network architecture for applications concerning 3D recognition such as object classification and part segmentation. These sample codes implement a variant of PointNet for 3D object classification, for inference only with ModelNet10, providing a comprehensive example of using oneDNN. You can see the following initial instructions on using the samples.
examples/network/README.md
Outdated
[PointNet][pointnet-paper] is a convolutional neural network architecture for applications concerning 3D recognition such as object classification and part segmentation. These sample codes implement a variant of PointNet for 3D object classification, for inference only with ModelNet10, showing a larger example of using oneDNN. Some rough instructions for how it might be used are provided.

## Obtaining the model weights and classes and preparing an input pointcloud
## Obtaining the model weights and classes and preparing an input pointcloud
## Obtain the Model Weights and Classes and Prepare an Input pointcloud
examples/network/README.md
Outdated
## Obtaining the model weights and classes and preparing an input pointcloud

A preprocessing script is provided which unpacks the weights from a pretrained pytorch model. The script also prepares an input pointcloud for testing inference. The pointcloud is made from 3D scans taken from the [ModelNet10][modelnet] dataset. The script requires an installation of [PyTorch][pytorch]. First download the pretrained PointNet weights and move the pth file into the same directory of the model.
A preprocessing script is provided which unpacks the weights from a pretrained pytorch model. The script also prepares an input pointcloud for testing inference. The pointcloud is made from 3D scans taken from the [ModelNet10][modelnet] dataset. The script requires an installation of [PyTorch][pytorch]. First download the pretrained PointNet weights and move the pth file into the same directory of the model.
A preprocessing script is provided which unpacks the weights from a pre-trained PyTorch model. The script also prepares an input pointcloud for testing inference. The pointcloud is made from 3D scans taken from the [ModelNet10][modelnet] dataset. The script requires an installation of [PyTorch][pytorch].
First, download the pre-trained PointNet weights and then move the pth file into the same directory containing the model.
examples/network/README.md
Outdated
python3 prepareData.py ModelNet10/ pointnet_model.pth
```

The weights will be saved to `data/` and the input pointcloud will be saved as `itemName_cloud.bin`
The weights will be saved to `data/` and the input pointcloud will be saved as `itemName_cloud.bin`
The weights will be saved to `data/` and the input pointcloud will be saved as `itemName_cloud.bin`.
examples/network/README.md
Outdated
The weights will be saved to `data/` and the input pointcloud will be saved as `itemName_cloud.bin`

## Testing on a pointcloud
## Testing on a pointcloud
## Test on a pointcloud
examples/network/README.md
Outdated
is built by the target `network-pointnet-cpp`. The samples must first
be passed the directory where the binary weights files are stored and the second
argument should be the preprocessed pointcloud that should be classified. The expected
output is of a classification index and a series of times in nanoseconds that corresond
output is of a classification index and a series of times in nanoseconds that corresond
output is a classification index and a series of times in nanoseconds that correspond
examples/network/README.md
Outdated
## Testing on a pointcloud

The oneDNN samples are built in the default CMake configuration. The sample
is built by the target `network-pointnet-cpp`. The samples must first
Is the directory containing the binary weight files passed as the first argument when running a oneDNN sample? Suggesting a rewrite for consideration to improve readability; please modify as you see fit:
To test a sample, the directory where the binary weights files are stored must be passed as the first argument. The second argument should be the preprocessed pointcloud that should be classified.
Description
Converted to DRAFT to address comments; it will be closed and moved to oneAPI Samples as suggested once the comments are resolved.
This PR adds a useful example of how to implement a full model using oneDNN. It implements inference of the PointNet model using the ModelNet10 dataset. The example also includes a Python script that, using a pre-trained model, converts data to the point cloud used as input for the inference example.
The introduction of this example is necessary to help us move existing portDNN users to oneDNN, showing that everything they are used to achieving with portDNN is possible with oneDNN. It would also allow Codeplay Software to properly archive portDNN.
Checklist
General