Update README.md
---
# Scene Flow Models for Autonomous Driving Dataset

<p align="center">
  <a href="https://github.com/KTH-RPL/OpenSceneFlow">
    <picture>
      <img alt="opensceneflow" src="https://github.com/KTH-RPL/OpenSceneFlow/blob/main/assets/docs/logo.png?raw=true" width="600">
    </picture><br>
  </a>
</p>

If you find [*OpenSceneFlow*](https://github.com/KTH-RPL/OpenSceneFlow) useful to your research, please cite [**our works**](#cite-us) and give [a star](https://github.com/KTH-RPL/OpenSceneFlow) as encouragement.

OpenSceneFlow is a codebase for point cloud scene flow estimation.
Please check the usage on [KTH-RPL/OpenSceneFlow](https://github.com/KTH-RPL/OpenSceneFlow).

<!-- - [DeFlow](https://arxiv.org/abs/2401.16122): Supervised scene flow; the included model is trained on Argoverse 2.
- [SeFlow](https://arxiv.org/abs/2407.01702): **Self-supervised** scene flow; the included model is trained on Argoverse 2. The paper also reports Waymo results, but that weight cannot be shared under the [Waymo Terms](https://waymo.com/open/terms/); see the discussion in [issue 8](https://github.com/KTH-RPL/SeFlow/issues/8#issuecomment-2464224813).
- [SSF](https://arxiv.org/abs/2501.17821): Supervised long-range scene flow; the included model is trained on Argoverse 2.
- [Flow4D](https://ieeexplore.ieee.org/document/10887254): Supervised 4D voxel network scene flow; the included model is trained on Argoverse 2. -->
The files we include and all test-result reports can be found in the [v2 leaderboard](https://github.com/KTH-RPL/DeFlow/discussions/6) and [v1 leaderboard](https://github.com/KTH-RPL/DeFlow/discussions/2).

* [ModelName_best].ckpt: the model evaluated on the public leaderboard page, either provided by the authors or retrained by us with the best parameters.
* [demo_data.zip](https://huggingface.co/kin-zhang/OpenSceneFlow/blob/main/demo_data.zip): 613 MB, a mini dataset for users to quickly run the train/val code. Check the usage in [this section](https://github.com/KTH-RPL/SeFlow?tab=readme-ov-file#1-run--train).
* [waymo_map.tar.gz](https://huggingface.co/kin-zhang/OpenSceneFlow/blob/main/waymo_map.tar.gz): needed to process Waymo data, with ground segmentation included, into the unified h5 file. Check the usage in [this README](https://github.com/KTH-RPL/SeFlow/blob/main/dataprocess/README.md#waymo-dataset).
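Unpacking the map archive is a plain tar extraction. A minimal sketch using Python's standard `tarfile` module, with a stand-in archive built on the spot; the inner layout (`waymo_map/map.bin`) and the `data/waymo` target directory are assumptions for illustration, not the repo's documented layout:

```python
import pathlib
import tarfile

# Build a stand-in archive with the same name, purely for illustration;
# the real one comes from the Hugging Face link above.
src = pathlib.Path("scratch/waymo_map")
src.mkdir(parents=True, exist_ok=True)
(src / "map.bin").write_bytes(b"placeholder ground-segmentation map")
with tarfile.open("waymo_map.tar.gz", "w:gz") as tar:
    tar.add(src, arcname="waymo_map")

# Extract next to the dataset (the data/waymo target is an assumption).
dest = pathlib.Path("data/waymo")
dest.mkdir(parents=True, exist_ok=True)
with tarfile.open("waymo_map.tar.gz", "r:gz") as tar:
    tar.extractall(dest)

print((dest / "waymo_map" / "map.bin").exists())  # → True
```

With the real download, only the extraction half applies; point `dest` at wherever the data-process README expects the map.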

<!-- <br> -->
You can try the following methods in our code and build your own benchmark without extra effort.

- [x] [SSF](https://arxiv.org/abs/2501.17821) (Ours): ICRA 2025
- [x] [Flow4D](https://ieeexplore.ieee.org/document/10887254): RA-L 2025
- [x] [SeFlow](https://arxiv.org/abs/2407.01702) (Ours): ECCV 2024
- [x] [DeFlow](https://arxiv.org/abs/2401.16122) (Ours): ICRA 2024
- [x] [FastFlow3d](https://arxiv.org/abs/2103.01306): RA-L 2021
- [x] [ZeroFlow](https://arxiv.org/abs/2305.10424): ICLR 2024; their pre-trained weights can be converted into our format easily through [the script](https://github.com/KTH-RPL/SeFlow/tools/zerof2ours.py).
- [ ] [NSFP](https://arxiv.org/abs/2111.01253): NeurIPS 2021, 3x faster than the original version thanks to [our CUDA speedup](https://github.com/KTH-RPL/SeFlow/assets/cuda/README.md), with the same (slightly better) performance. Done coding; public after review.
- [ ] [FastNSF](https://arxiv.org/abs/2304.09121): ICCV 2023. Done coding; public after review.
- [ ] ... more on the way
</details>
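A weight conversion like the ZeroFlow one above usually amounts to renaming checkpoint state-dict keys by prefix. A toy sketch of that idea in plain Python; `convert_state_dict` and the prefix names are hypothetical, not what `zerof2ours.py` actually does:

```python
def convert_state_dict(src: dict, prefix_map: dict) -> dict:
    """Rename checkpoint keys by longest-matching prefix (illustrative only)."""
    out = {}
    for key, value in src.items():
        new_key = key
        # Try longer prefixes first so "backbone.head." wins over "backbone.".
        for old, new in sorted(prefix_map.items(), key=lambda kv: -len(kv[0])):
            if key.startswith(old):
                new_key = new + key[len(old):]
                break
        out[new_key] = value
    return out

# Hypothetical prefixes and toy values standing in for tensors.
src = {"backbone.conv1.weight": [0.1], "head.fc.bias": [0.2]}
mapping = {"backbone.": "encoder.", "head.": "decoder."}
print(convert_state_dict(src, mapping))
# → {'encoder.conv1.weight': [0.1], 'decoder.fc.bias': [0.2]}
```

With real checkpoints the values would be tensors loaded via `torch.load`, but the renaming logic is the same.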

## Cite Us

*OpenSceneFlow* is designed by [Qingwen Zhang](https://kin-zhang.github.io/) from the DeFlow and SeFlow projects. If you find it useful, please cite our works:

```bibtex
@inproceedings{zhang2024seflow,
  author={Zhang, Qingwen and Yang, Yi and Li, Peizheng and Andersson, Olov and Jensfelt, Patric},
  title={{SeFlow}: A Self-Supervised Scene Flow Method in Autonomous Driving},
  organization={Springer},
  doi={10.1007/978-3-031-73232-4_20},
}
@inproceedings{zhang2024deflow,
  author={Zhang, Qingwen and Yang, Yi and Fang, Heng and Geng, Ruoyu and Jensfelt, Patric},
  booktitle={2024 IEEE International Conference on Robotics and Automation (ICRA)},
  title={{DeFlow}: Decoder of Scene Flow Network in Autonomous Driving},
  year={2024},
  pages={2105-2111},
  doi={10.1109/ICRA57147.2024.10610278}
}
@article{zhang2025himu,
  title={HiMo: High-Speed Objects Motion Compensation in Point Cloud},
  author={Zhang, Qingwen and Khoche, Ajinkya and Yang, Yi and Ling, Li and Mansouri, Sina Sharif and Andersson, Olov and Jensfelt, Patric},
  year={2025},
  journal={arXiv preprint arXiv:2503.00803},
}
```

And works from our excellent collaborators:

```bibtex
@article{kim2025flow4d,
  author={Kim, Jaeyeul and Woo, Jungwan and Shin, Ukcheol and Oh, Jean and Im, Sunghoon},
  journal={IEEE Robotics and Automation Letters},
  title={Flow4D: Leveraging 4D Voxel Network for LiDAR Scene Flow Estimation},
  year={2025},
  volume={10},
  number={4},
  pages={3462-3469},
  doi={10.1109/LRA.2025.3542327}
}
@article{khoche2025ssf,
  title={SSF: Sparse Long-Range Scene Flow for Autonomous Driving},
  author={Khoche, Ajinkya and Zhang, Qingwen and Sanchez, Laura Pereira and Asefaw, Aron and Mansouri, Sina Sharif and Jensfelt, Patric},
  journal={arXiv preprint arXiv:2501.17821},
  year={2025}
}
```

Feel free to contribute your method and add your BibTeX here via a pull request!