Hi,
First of all, thank you for this great open-source SLAM framework. It looks very promising with the current data-driven descriptors.
I have recently started using SegMap, and I am currently looking into how to train the CNN autoencoder and the semantic classifier on my own sensor data, in order to obtain my own CNN model and use it for my application (an indoor environment, so the segments look quite different).
I followed the tutorial on how to train a new CNN model and semantic classifier, but I could not find any documentation on how to generate your own dataset from your own data (that is, obtaining the .csv files required to actually train a CNN model).
I have seen that segmappy's script import_export.py contains some tools for writing .csv files, such as this function, but I haven't found any other script that uses those functions, nor any related documentation.
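To make the question concrete, this is roughly the kind of call I am trying to write (a minimal sketch only; the module path, the function name write_segments, and its argument order are my assumptions from skimming import_export.py, not a documented API):

```python
import numpy as np

# Hypothetical usage -- the module path, function name, and argument
# order below are my guesses, not confirmed by any SegMap documentation.
from segmappy.tools.import_export import write_segments

# One (N_i, 3) array of xyz points per extracted segment, plus an
# integer id for each segment.
segment_ids = [0, 1]
segment_point_clouds = [np.random.rand(100, 3), np.random.rand(80, 3)]

write_segments("my_dataset/segments.csv", segment_ids, segment_point_clouds)
```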
The main questions that I would like to ask are:
What is needed in order to generate all the .csv files? It seems that the datasets can be generated either by passing the relevant information (classes, matches, features, ...) to these write functions, or by passing a .pcd file that contains the segments to be saved as a .csv file. Is there any additional information about the input format that these functions expect?
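For reference, this is how I currently imagine building a segments .csv from my own .pcd files (a sketch under my own assumptions: the one-point-per-row [segment_id, x, y, z] layout is only a guess at what the training scripts expect, so please correct me if the real format differs):

```python
import numpy as np
import open3d as o3d  # used here only to parse the .pcd files

def pcd_segments_to_csv(pcd_paths, out_csv):
    """Stack all segments into rows of [segment_id, x, y, z] (assumed layout)."""
    rows = []
    for seg_id, path in enumerate(pcd_paths):
        points = np.asarray(o3d.io.read_point_cloud(path).points)  # (N, 3)
        ids = np.full((points.shape[0], 1), seg_id, dtype=float)
        rows.append(np.hstack([ids, points]))
    np.savetxt(out_csv, np.vstack(rows), delimiter=",",
               fmt=["%d", "%.6f", "%.6f", "%.6f"])

# Example: one .pcd file per segment extracted from my indoor map.
pcd_segments_to_csv(["segment_000.pcd", "segment_001.pcd"], "segments.csv")
```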
How can I use these tools? Can I generate the dataset .csv files after obtaining a target map with all its segments using SegMap (with either the eigenvalue-based descriptor or another descriptor)? Or what would be the correct way of obtaining these .csv files?
I was wondering if you could help me figure out how to actually use those functions and the input format they expect. Are there perhaps other scripts or tools for generating datasets that could also be shared within SegMap?
Thanks in advance for the help!