update 2 papers #50

Open
wants to merge 2 commits into main
1 change: 1 addition & 0 deletions README.md
@@ -1277,6 +1277,7 @@ of China*). [[Paper](https://arxiv.org/abs/2207.07284)][[PyTorch](https://github
* **FTN**: "Fully Transformer Networks for Semantic Image Segmentation", arXiv, 2021 (*Baidu*). [[Paper](https://arxiv.org/abs/2106.04108)]
* **SegFormer**: "SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers", NeurIPS, 2021 (*NVIDIA*). [[Paper](https://arxiv.org/abs/2105.15203)][[PyTorch](https://github.com/NVlabs/SegFormer)]
* **MaskFormer**: "Per-Pixel Classification is Not All You Need for Semantic Segmentation", NeurIPS, 2021 (*UIUC + Facebook*). [[Paper](https://arxiv.org/abs/2107.06278)][[Website](https://bowenc0221.github.io/maskformer/)]
* **GA-Nav**: "GANav: Efficient Terrain Segmentation for Robot Navigation in Unstructured Outdoor Environments", IEEE Robotics and Automation Letters, 2021 (*University of Maryland, College Park*). [[Paper](https://arxiv.org/abs/2103.04233)][[Code](https://github.com/rayguan97/GANav-offroad)][[Website](https://gamma.umd.edu/researchdirections/autonomousdriving/offroad/)]
* **OffRoadTranSeg**: "OffRoadTranSeg: Semi-Supervised Segmentation using Transformers on OffRoad environments", arXiv, 2021 (*IISER, India*). [[Paper](https://arxiv.org/abs/2106.13963)]
* **TRFS**: "Boosting Few-shot Semantic Segmentation with Transformers", arXiv, 2021 (*ETHZ*). [[Paper](https://arxiv.org/abs/2108.02266)]
* **Flying-Guide-Dog**: "Flying Guide Dog: Walkable Path Discovery for the Visually Impaired Utilizing Drones and Transformer-based Semantic Segmentation", arXiv, 2021 (*KIT, Germany*). [[Paper](https://arxiv.org/abs/2108.07007)][[Code (in construction)](https://github.com/EckoTan0804/flying-guide-dog)]
1 change: 1 addition & 0 deletions README_2.md
@@ -142,6 +142,7 @@ If you find this repository useful, please consider citing this list:
* **?**: "Self-Attention Amortized Distributional Projection Optimization for Sliced Wasserstein Point-Cloud Reconstruction", ICML, 2023 (*UT Austin*). [[Paper](https://arxiv.org/abs/2301.04791)]
* **ReCon**: "Contrast with Reconstruct: Contrastive 3D Representation Learning Guided by Generative Pretraining", ICML, 2023 (*Megvii*). [[Paper](https://arxiv.org/abs/2302.02318)][[PyTorch](https://github.com/qizekun/ReCon)]
* **OctFormer**: "OctFormer: Octree-based Transformers for 3D Point Clouds", SIGGRAPH, 2023 (*Peking University*). [[Paper](https://arxiv.org/abs/2305.03045)][[Code (in construction)](https://github.com/octree-nn/octformer)][[Website](https://wang-ps.github.io/octformer)]
* **CrossLoc3D**: "CrossLoc3D: Aerial-Ground Cross-Source 3D Place Recognition", ICCV, 2023 (*University of Maryland, College Park*). [[Paper](https://arxiv.org/abs/2303.17778)][[Code](https://github.com/rayguan97/crossloc3d)]
* **SVDFormer**: "SVDFormer: Complementing Point Cloud via Self-view Augmentation and Self-structure Dual-generator", ICCV, 2023 (*Nanjing University of Aeronautics and Astronautics*). [[Paper](https://arxiv.org/abs/2307.08492)][[PyTorch](https://github.com/czvvd/SVDFormer)]
* **TAP**: "Take-A-Photo: 3D-to-2D Generative Pre-training of Point Cloud Models", ICCV, 2023 (*Tsinghua*). [[Paper](https://arxiv.org/abs/2307.14971)][[PyTorch](https://github.com/wangzy22/TAP)]
* **MATE**: "MATE: Masked Autoencoders are Online 3D Test-Time Learners", ICCV, 2023 (*Graz University of Technology, Austria*). [[Paper](https://arxiv.org/abs/2211.11432)]