
example: Add an example of Pointnet inference implementation #2845

Draft

wants to merge 8 commits into base: main
Conversation

Contributor

@s-Nick s-Nick commented Mar 10, 2025

Description

Converted to DRAFT to address comments; it will be closed and moved to oneAPI Samples, as suggested, once the comments are resolved.

This PR adds a useful example of how to implement a full model using oneDNN. It implements inference of the PointNet model using the ModelNet10 dataset. The example also includes a Python script that, using a pre-trained model, converts data into a point cloud to use as input for the inference example.

This example is necessary to help us move existing portDNN users to oneDNN, by showing that everything they are used to achieving with portDNN is possible with oneDNN. It would also allow Codeplay Software to properly archive portDNN.

Checklist

General

  • Have you formatted the code using clang-format?

s-Nick added 2 commits March 10, 2025 09:10
Add an implementation of the PointNet model as an example of a more complex NN.
Example working with ModelNet10 input
@s-Nick s-Nick requested review from a team as code owners March 10, 2025 09:23
@github-actions github-actions bot added documentation A request to change/fix/improve the documentation. Codeowner: @oneapi-src/onednn-doc component:examples labels Mar 10, 2025
Comment on lines +21 to +24
def normalize(points):
norm_pointcloud = points - np.mean(points, axis=0)
norm_pointcloud /= np.max(np.linalg.norm(norm_pointcloud, axis=1))
return norm_pointcloud
Contributor

@AD2605 AD2605 Mar 10, 2025

I am unable to understand which normalization you are trying to use here. You seem to first subtract the mean calculated along axis 0 from the input values and then divide them by the maximum Euclidean norm (see np.linalg.norm) calculated along axis 1.

Do you wish to use Z-score normalization (AKA (x - mean) / std)?

Contributor Author

This normalization seems to be common when working with point clouds.
The idea is to center all the points and then scale them to a unit sphere. There are more advanced techniques, but I think this simple one is good enough.
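For readers following this thread, the normalization under discussion can be sketched and sanity-checked in isolation. This is a minimal NumPy sketch of the same idea (center at the origin, scale into the unit sphere), not the PR's exact file:

```python
import numpy as np

def normalize(points):
    """Center a point cloud at the origin, then scale it into the unit sphere.

    Mirrors the snippet under review: subtract the per-axis mean
    (axis=0 gives one mean per coordinate), then divide every point by
    the largest per-point Euclidean norm (axis=1 gives one norm per point).
    """
    norm_pointcloud = points - np.mean(points, axis=0)
    norm_pointcloud /= np.max(np.linalg.norm(norm_pointcloud, axis=1))
    return norm_pointcloud

# Sanity check on random data: the result is centered
# and the farthest point lies exactly on the unit sphere.
pts = np.random.default_rng(0).normal(size=(100, 3))
out = normalize(pts)
assert np.allclose(out.mean(axis=0), 0.0, atol=1e-12)
assert np.isclose(np.linalg.norm(out, axis=1).max(), 1.0)
```

Note this is not Z-score normalization: the scale factor is a single scalar (the maximum point norm), not a per-axis standard deviation.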

Comment on lines 123 to 124
this->out_ptr_
= sycl::malloc_device<T>(in_n * filt_f * oh * ow, sycl_queue);
Contributor

You needn't extract the queue simply to allocate the pointer; dnnl::memory will allocate the memory for you, similar to how you have approached it for the other memory parameters like conv_src_mem and conv_weights_mem. Any reason you did not go ahead with this approach for the dst_mem?

Same applies to the other layers

Contributor

See https://oneapi-src.github.io/oneDNN/struct_dnnl_memory-2.html#details-structdnnl-1-1memory;
this will also free you from explicit memory management, as the library will own the buffer.

Contributor Author

Thank you for the comment; I changed the approach accordingly.
When I was porting from portDNN, I switched to raw pointers because that is how it is implemented there and I was not super familiar with oneDNN. Now everything uses dnnl::memory as you pointed out, and it is much better. The changes are in 9d2cc7b.

Comment on lines 718 to 725
void dump_output() {
auto output = get_output_as_host_vec();
std::cout << "Output:\n";
for (auto e : output) {
std::cout << e << ", ";
}
std::cout << "\n";
}
Contributor

I suppose this is an artefact from debugging; maybe it's better to remove it?

Comment on lines +731 to +739
inline void add_conv_bias_layer(Network<T> &net, dnnl::engine &handle,
dnnl::stream &stream, std::string const &filter_file,
std::string const &bias_file, const int in_n, const int in_c,
const int in_h, const int in_w, const int filt_f, const int filt_c,
const int filt_h, const int filt_w) {
net.add_layer(std::make_unique<ConvBiasLayer<T>>(handle, stream,
filter_file, bias_file, in_n, in_c, in_h, in_w, filt_f, filt_c,
filt_h, filt_w));
}
Contributor

@AD2605 AD2605 Mar 10, 2025

These seem to be thin wrappers around directly calling net.add_layer(std::make_unique<...>(...)); do they provide any utility which I may be missing?

Contributor

Same applies to the other add_* wrappers as well.

file_directory + bn_var_file, batch, out_c, 1, 1);
}

void pointnet(dnnl::engine::kind engine_kind) {}
Contributor

remove ?

Comment on lines 1005 to 1008
add_softmax_layer(feature_transform_block, eng, stream, 32, 10, 1, 1);

add_log_layer(feature_transform_block, eng, stream, 32, 10, 1, 1);

Contributor

I do not suppose these need to be two separate layers; you can also use logsoftmax, which does exactly this,
see https://oneapi-src.github.io/oneDNN/dev_guide_softmax.html.

You could also use the eltwise post-op, which softmax supports, to apply the log.

Contributor Author

Addressed in 3faeea6
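The equivalence behind this suggestion (a softmax layer followed by a log layer computes the same values as a single log-softmax) can be illustrated with a small NumPy check. This is a sketch of the math only, not of the oneDNN API:

```python
import numpy as np

def softmax(x):
    # Shift by the max for numerical stability before exponentiating.
    e = np.exp(x - x.max())
    return e / e.sum()

def log_softmax(x):
    # Fused form: log(softmax(x)) without materializing the softmax,
    # avoiding log of very small intermediate values.
    shifted = x - x.max()
    return shifted - np.log(np.exp(shifted).sum())

x = np.array([1.0, 2.0, 3.0])
# Two separate steps vs. the single fused op agree:
assert np.allclose(np.log(softmax(x)), log_softmax(x))
# Exponentiating log-softmax recovers a valid probability distribution:
assert np.isclose(np.exp(log_softmax(x)).sum(), 1.0)
```

Beyond saving a layer, the fused form is the numerically safer way to compute log-probabilities, which is why a single log-softmax is generally preferred over chaining softmax and log.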

std::cout << "classed as " << mode << " (i.e., " << object_map[mode] << ")"
<< std::endl;

#if CHECK_PERF
Contributor

nit: ideally this should be a command line argument so that there's no need to recompile when trying to measure perf

Contributor Author

Honestly, I am not sure we want something like this for a oneDNN example. It was in the portDNN code and I decided to keep it for now and get some feedback. When I migrate everything to oneAPI Samples, I'll check if there is an easy and user-friendly way to enable/disable it; otherwise I'll remove this part.

s-Nick added 4 commits March 12, 2025 09:07
Output of a layer was stored in a pointer; now the output is stored in
a dnnl::memory object that is passed to the following layer. This
removes the need to synchronize after each layer execution.
The oneDNN samples are built in the default CMake configuration. The sample
is built by the target `network-pointnet-cpp`. The samples must first
be passed the directory where the binary weights files are stored and the second
argument should be the preprocessed pointcloud that should be classified. The expected
Contributor

oneDNN examples are used in various CI environments and across different architectures. A dependency on PyTorch would be a bit of a hassle. I would suggest putting this one into oneAPI Samples instead of the main repo.

+@onednnsupporttriage

Contributor Author

Thank you for suggesting a better place for our sample. Once all comments are resolved, I'll open another PR to the appropriate repo.

@s-Nick s-Nick marked this pull request as draft March 12, 2025 16:24
Contributor

@ranukund ranukund left a comment

A few edits suggested, please incorporate as you see fit.

@@ -0,0 +1,35 @@
# PointNet Convolutional Neural Network Sample for 3D Pointcloud Classification

[PointNet][pointnet-paper] is a convolutional neural network architecture for applications concerning 3D recognition such as object classification and part segmentation. These sample codes implement a variant of PointNet for 3D object classification, for inference only with ModelNet10, showing a larger example of using oneDNN. Some rough instructions for how it might be used are provided.
Contributor

Suggested change
[PointNet][pointnet-paper] is a convolutional neural network architecture for applications concerning 3D recognition such as object classification and part segmentation. These sample codes implement a variant of PointNet for 3D object classification, for inference only with ModelNet10, showing a larger example of using oneDNN. Some rough instructions for how it might be used are provided.
[PointNet][pointnet-paper] is a convolutional neural network architecture for applications concerning 3D recognition such as object classification and part segmentation. These sample codes implement a variant of PointNet for 3D object classification, for inference only with ModelNet10, providing a comprehensive example of using oneDNN. You can see the following initial instructions on using the samples.


[PointNet][pointnet-paper] is a convolutional neural network architecture for applications concerning 3D recognition such as object classification and part segmentation. These sample codes implement a variant of PointNet for 3D object classification, for inference only with ModelNet10, showing a larger example of using oneDNN. Some rough instructions for how it might be used are provided.

## Obtaining the model weights and classes and preparing an input pointcloud
Contributor

Suggested change
## Obtaining the model weights and classes and preparing an input pointcloud
## Obtain the Model Weights and Classes and Prepare an Input pointcloud


## Obtaining the model weights and classes and preparing an input pointcloud

A preprocessing script is provided which unpacks the weights from a pretrained pytorch model. The script also prepares an input pointcloud for testing inference. The pointcloud is made from 3D scans taken from the [ModelNet10][modelnet] dataset. The script requires an installation of [PyTorch][pytorch]. First download the pretrained PointNet weights and move the pth file into the same directory of the model.
Contributor

Suggested change
A preprocessing script is provided which unpacks the weights from a pretrained pytorch model. The script also prepares an input pointcloud for testing inference. The pointcloud is made from 3D scans taken from the [ModelNet10][modelnet] dataset. The script requires an installation of [PyTorch][pytorch]. First download the pretrained PointNet weights and move the pth file into the same directory of the model.
A preprocessing script is provided which unpacks the weights from a pre-trained PyTorch model. The script also prepares an input pointcloud for testing inference. The pointcloud is made from 3D scans taken from the [ModelNet10][modelnet] dataset. The script requires an installation of [PyTorch][pytorch].
First, download the pre-trained PointNet weights and then move the pth file into the same directory containing the model.

```
python3 prepareData.py ModelNet10/ pointnet_model.pth
```

The weights will be saved to `data/` and the input pointcloud will be saved as `itemName_cloud.bin`
Contributor

Suggested change
The weights will be saved to `data/` and the input pointcloud will be saved as `itemName_cloud.bin`
The weights will be saved to `data/` and the input pointcloud will be saved as `itemName_cloud.bin`.


The weights will be saved to `data/` and the input pointcloud will be saved as `itemName_cloud.bin`

## Testing on a pointcloud
Contributor

Suggested change
## Testing on a pointcloud
## Test on a pointcloud

is built by the target `network-pointnet-cpp`. The samples must first
be passed the directory where the binary weights files are stored and the second
argument should be the preprocessed pointcloud that should be classified. The expected
output is of a classification index and a series of times in nanoseconds that corresond
Contributor

Suggested change
output is of a classification index and a series of times in nanoseconds that corresond
output is a classification index and a series of times in nanoseconds that correspond

## Testing on a pointcloud

The oneDNN samples are built in the default CMake configuration. The sample
is built by the target `network-pointnet-cpp`. The samples must first
Contributor

Is the directory containing the binary weight files passed as the first argument when running a oneDNN sample? Suggesting a rewrite for consideration to improve readability; please modify as you see fit:

To test a sample, the directory where the binary weights files are stored must be passed as the first argument. The second argument should be the preprocessed pointcloud that should be classified.

Contributor Author

Thank you for your feedback @ranukund, I updated everything in dedc77d.

5 participants