How to convert CARLA coordinates to GNSS coordinates #5
Comments
Hi, the OPV2V format uses the same coordinate system as CARLA. This is a left-handed coordinate system with x and y spanning the ground plane and the z axis pointing up. To convert, please check this old code of mine; you can also verify it yourself by converting the bounding boxes into the point cloud: https://gist.github.com/zhz03/e1f07e740d569ea210d55dbcc0051085
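A minimal sketch of the two steps described above (this is not the linked gist itself): flipping the left-handed CARLA convention into a right-handed one, and expanding a center/extent/yaw box into corner points so it can be overlaid on the point cloud for a visual check. The y-flip convention and the function names are assumptions; verify them against the gist and your own data.

```python
import numpy as np

def carla_to_right_handed(x, y, z, yaw_deg):
    """Convert a pose from CARLA's left-handed frame (x forward, y right, z up)
    to a right-handed frame by flipping the y axis and the yaw sign.
    This is a common convention fix; confirm it against your own data."""
    return x, -y, z, -yaw_deg

def bbx_to_corners(cx, cy, cz, l, w, h, yaw_deg):
    """Turn a center/size/yaw bounding box into its 8 corner points so it can
    be plotted together with the lidar point cloud as a sanity check."""
    yaw = np.deg2rad(yaw_deg)
    # Half-size offsets of the 8 corners in the box frame.
    x_c = np.array([ 1,  1, -1, -1,  1,  1, -1, -1]) * l / 2
    y_c = np.array([ 1, -1, -1,  1,  1, -1, -1,  1]) * w / 2
    z_c = np.array([-1, -1, -1, -1,  1,  1,  1,  1]) * h / 2
    # Rotate about z (yaw) and translate to the box center.
    rot = np.array([[np.cos(yaw), -np.sin(yaw), 0],
                    [np.sin(yaw),  np.cos(yaw), 0],
                    [0,            0,           1]])
    corners = rot @ np.vstack([x_c, y_c, z_c])
    return corners.T + np.array([cx, cy, cz])
```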
Hi, we did not understand how to proceed with your code. Looking on the internet, we found that we need to identify the GNSS origin. Which GNSS origin is defined in the CARLA simulation settings? Currently, we found this one: origin_lat = 34.06855089153448, origin_lon = -118.44560862026573. In any case, could you provide the latitude, longitude, and altitude that you used as the GNSS origin?
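For context, a hedged sketch of how an origin latitude/longitude is typically combined with local x/y in CARLA-style tooling (a scaled-Mercator approximation similar to what CARLA's scenario tooling uses). The origin values are just the ones quoted above and are not confirmed to match this dataset.

```python
import math

# Illustrative only: use whatever geo-reference the map actually defines.
ORIGIN_LAT = 34.06855089153448
ORIGIN_LON = -118.44560862026573

def location_to_gps(x, y, z, lat_ref=ORIGIN_LAT, lon_ref=ORIGIN_LON):
    """Map local CARLA-style x/y (meters) around a reference origin to lat/lon
    using a scaled Mercator approximation. CARLA's y axis points the opposite
    way to north, hence the minus sign on y."""
    EARTH_RADIUS = 6378137.0  # WGS-84 equatorial radius in meters
    scale = math.cos(math.radians(lat_ref))
    mx = scale * math.radians(lon_ref) * EARTH_RADIUS
    my = scale * EARTH_RADIUS * math.log(math.tan((90.0 + lat_ref) * math.pi / 360.0))
    mx += x
    my -= y
    lon = math.degrees(mx / (EARTH_RADIUS * scale))
    lat = 360.0 * math.atan(math.exp(my / (EARTH_RADIUS * scale))) / math.pi - 90.0
    return lat, lon, z
```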
Sorry for the confusion, the code is just an example to show how to convert bounding boxes from the CARLA coordinate system to a real-world coordinate system. I thought that was what you wanted to ask, since the CARLA coordinate system is left-handed, which is quite different from real-world conventions. As for the GNSS info, that is something we cannot provide: that level of information is too sensitive, and there is no need for it when using the dataset. In the YAML file, we have provided the vehicle's global pose on the map at each time frame. If you want to project bounding boxes into the lidar frame, it looks like this: https://raw.githubusercontent.com/zhz03/picx-images-hosting/master/v2x_real_bbx_projection.45qmm9676.webp
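A minimal sketch of that projection, assuming the YAML pose has the form [x, y, z, roll, yaw, pitch]: build the lidar's world transform, invert it, and push world-frame box corners through it. The Euler-angle convention below is a generic yaw-pitch-roll assumption; the dataset's own transformation utilities should be treated as the reference.

```python
import numpy as np

def pose_to_matrix(x, y, z, roll, yaw, pitch):
    """4x4 homogeneous transform from a pose in degrees. The Z-Y-X
    (yaw, pitch, roll) composition here is an assumption; match it to the
    dataset's own transformation code."""
    r, p, yw = np.radians([roll, pitch, yaw])
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(r), -np.sin(r)],
                   [0, np.sin(r),  np.cos(r)]])
    Ry = np.array([[ np.cos(p), 0, np.sin(p)],
                   [0, 1, 0],
                   [-np.sin(p), 0, np.cos(p)]])
    Rz = np.array([[np.cos(yw), -np.sin(yw), 0],
                   [np.sin(yw),  np.cos(yw), 0],
                   [0, 0, 1]])
    T = np.eye(4)
    T[:3, :3] = Rz @ Ry @ Rx
    T[:3, 3] = [x, y, z]
    return T

def world_to_lidar(points_world, lidar_pose):
    """Project Nx3 world-frame points (e.g. box corners) into the lidar frame
    given the lidar's global pose from the YAML."""
    T_lidar_to_world = pose_to_matrix(*lidar_pose)
    T_world_to_lidar = np.linalg.inv(T_lidar_to_world)
    pts = np.hstack([points_world, np.ones((len(points_world), 1))])
    return (T_world_to_lidar @ pts.T).T[:, :3]
```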
Thank you for your detailed reply, and sorry for the misunderstanding. We wanted to know how to convert CARLA coordinates to real-world coordinates (exactly what you pointed out at the beginning). From what we understood, we need to use the example code you shared before. Just a clarification on this point: what are the components of the features array you pass to the function? We thought it was the location, extent, and angle from the YAML files, but we saw that there are 10 components. Also, does the vehicle's global pose go under "true_ego_pose"? Thank you in advance for your availability.
For more details regarding the dataset format, you can check https://opencood.readthedocs.io/en/latest/md_files/data_annotation_tutorial.html. The input to the function should be the location, extent, and angle from the YAML.
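As a rough illustration of assembling those three fields, a hedged sketch that reads an annotation file and builds [x, y, z, l, w, h, yaw] boxes. The key names follow the linked tutorial, but the angle ordering and whether extent stores half- or full-dimensions (CARLA extents are half sizes) should be checked against it.

```python
import yaml

def load_boxes(yaml_path):
    """Collect per-vehicle boxes from an OPV2V-style annotation file.
    Key names ('vehicles', 'location', 'extent', 'angle') follow the linked
    tutorial; the extent doubling and angle order are assumptions to verify."""
    with open(yaml_path, "r") as f:
        anno = yaml.safe_load(f)
    boxes = {}
    for vid, v in anno.get("vehicles", {}).items():
        x, y, z = v["location"]
        ex, ey, ez = v["extent"]          # CARLA-style extents are half sizes
        roll, yaw, pitch = v["angle"]     # assumed order: roll, yaw, pitch
        boxes[vid] = [x, y, z, 2 * ex, 2 * ey, 2 * ez, yaw]
    return boxes
```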
Dear authors,
Thank you for your valuable work.
We would like to know if it is possible to convert the CARLA coordinates in the dataset to GNSS coordinates.
We look forward to your kind reply.