allow more data conversion options #4

Open
wants to merge 19 commits into base: master
5 changes: 5 additions & 0 deletions .gitignore
@@ -0,0 +1,5 @@
**__pycache__**
**build**
**egg-info**
**dist**
argo2kitti.py
31 changes: 16 additions & 15 deletions README.md
@@ -4,8 +4,8 @@
# argoverse-kitti-adapter
A toolbox to translate [Argoverse dataset (CVPR2019)](https://www.argoverse.org/data.html) into [KITTI dataset (CVPR2012)](http://www.cvlibs.net/datasets/kitti/) format for perception/tracking tasks.

- Author: Yiyang Zhou
- Contact: yiyang.zhou@berkeley.edu
- Author: Yiyang Zhou, Di Feng
- Contact: yiyang.zhou@berkeley.edu, di.feng@berkeley.edu

## Introduction
This toolbox translates the Argoverse dataset (CVPR2019) into the KITTI dataset (CVPR2012) format. The major changes are:
@@ -34,7 +34,7 @@ argodataset
- To use the toolbox, please follow the instructions in the [argoverse github repository](https://github.com/argoai/argoverse-api/tree/16dec1ba51479a24b14d935e7873b26bfd1a7464) to install the corresponding python API.

### 3. Clone the Argoverse-kitti-adapter Repo
'''git clone https://github.com/yzhou377/argoverse-kitti-adapter.git'''
```git clone https://github.com/yzhou377/argoverse-kitti-adapter.git```
- Once the data and the API are in place, please open the 'adapter.py' file and change your root_dir (the directory to your argoverse-tracking folder). The toolbox will automatically construct the new output folder (the goal_dir) for you. The new file structure is shown below:

```
@@ -43,32 +43,33 @@ argodataset
└── train <-------------------------------------data_dir
└──5c251c22-11b2-3278-835c-0cf3cdee3f44
└──...
└── train_kitti <-------------------------------goal_dir
└──velodyne
└──image_2
└──calib
└──label_2
└──velodyne_reduced <-----------------------empty folder
└── ...
└── argoverse-kitti <---------------------------goal_dir
└── training
└──velodyne
└──image_2
└──calib
└──label_2
└── statistics <----------------------------dataset statistics
└── Imagesets <-----------------------------dataset split
```

### 4. Change Hyperparameters
- At the top of the adapter.py file, please change the root directory and the maximum labelling distance threshold; a sketch of this settings block is shown below.
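
A minimal sketch of what this settings block might look like. The variable names (`root_dir`, `goal_dir`, `max_distance`) and the default values are illustrative assumptions, not the exact identifiers used in adapter.py:

```python
# Hypothetical settings block (names and defaults are illustrative only).
root_dir = '/path/to/argodataset/argoverse-tracking/'  # path to your argoverse-tracking folder
goal_dir = root_dir + 'argoverse-kitti/'               # output folder created by the adapter
max_distance = 70                                      # labels beyond this range (in meters) are dropped
```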

### 5. Run the Adapter
- After changing the configuration, please run the adapter.py file using the following command:
"""python adapter.py"""
```python adapter.py```

## Note
1. Frequency and Synchronization
- In KITTI, the camera and the LIDAR are synchronized at 10Hz. However, in Argoverse, the ring cameras run at 30Hz, while the LIDAR runs at 10Hz. To match the KITTI dataset format, we pair each LIDAR scan with the camera frame at the closest timestamp (a sketch of this matching is given after this list). As a result, the sensor combo in the modified KITTI version of Argoverse still runs at 10Hz.
2. Multi-camera
- In the KITTI dataset, each image is matched with one LIDAR scan, one label file, and one calibration file. However, in Argoverse, seven ring-camera images share one LIDAR scan, and one log has only a single label/calibration combo. Using only the ring cameras, the LIDAR file is copied 7 times to match each image, and corresponding label/calibration files are generated as well.
3. Labelling File Clips
- KITTI only labels objects in the view of the front camera, while Argoverse, given its panoramic nature, labels all obstacles around the ego vehicle. Thus, for each associated labelling file, an object that is not visible in the specific image is not labelled. Furthermore, objects that are too far away (beyond 50m), and therefore too small, are not labelled. One can certainly change this threshold in ['adapter.py'](https://github.com/yzhou377/argoverse-kitti-adapter/blob/master/adapter.py). The KITTI label README file is attached [here](https://github.com/yzhou377/argoverse-kitti-adapter/blob/master/supplementals/KITTI_README). For the Argoverse label format, please check the [Argoverse github](https://github.com/argoai/argoverse-api/tree/16dec1ba51479a24b14d935e7873b26bfd1a7464)

2. Labelling File Clips
- KITTI only labels objects in the view of the front camera, while Argoverse, given its panoramic nature, labels all obstacles around the ego vehicle. Thus, for each associated labelling file, an object that is not visible in the specific image is not labelled. Furthermore, objects that are too far away (beyond 70m), and therefore too small, are not labelled. One can certainly change this threshold in [`adapter.py`](https://github.com/frankfengdi/argoverse-kitti-adapter/blob/master/adapter.py); a sketch of this filtering step is given after this list. The KITTI label README file is attached [here](https://github.com/frankfengdi/argoverse-kitti-adapter/blob/master/supplementals/KITTI_README). For the Argoverse label format, please check the [Argoverse github](https://github.com/argoai/argoverse-api/tree/16dec1ba51479a24b14d935e7873b26bfd1a7464)
4. Calibration File
- To match the KITTI calibration format, the tool combines the 'R0_rect' matrix with the 'P2' matrix so that 'P2' carries the full camera intrinsic matrix 'K'. In the new calibration file, 'R0_rect' is set to the identity matrix, while 'P2' contains all the intrinsics (see the sketch after this list).
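
For notes 1 and 2, a minimal sketch of the closest-timestamp matching between a 10Hz LIDAR sweep and the 30Hz ring cameras. The function name and the nanosecond timestamps are illustrative, not the adapter's actual API:

```python
import numpy as np

def closest_camera_frame(lidar_timestamp, camera_timestamps):
    """Return the index of the camera frame whose timestamp is closest
    to the given LIDAR sweep timestamp (all timestamps in nanoseconds)."""
    camera_timestamps = np.asarray(camera_timestamps, dtype=np.int64)
    return int(np.argmin(np.abs(camera_timestamps - lidar_timestamp)))

# Each 10Hz LIDAR sweep is paired with the nearest frame of every ring
# camera, and the sweep is then copied once per camera to mimic KITTI's
# one-image/one-scan pairing.
lidar_ts = 315967327020000000
ring_ts = [315967326990000000, 315967327023000000, 315967327056000000]
print(closest_camera_frame(lidar_ts, ring_ts))  # -> 1
```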
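
For the labelling-file note, a sketch of the two clipping rules (distance threshold and per-camera visibility). The function and argument names are assumptions for illustration, and the projection of the 3D box corners into the image is assumed to be done elsewhere:

```python
import numpy as np

def keep_label(center_ego, corners_img, img_w, img_h, max_distance=70.0):
    """Decide whether an Argoverse object label is written for one camera.

    center_ego  : (3,) object center in the ego-vehicle frame, in meters
    corners_img : (8, 2) 3D box corners already projected into the image
    """
    # Rule 1: drop objects farther than the distance threshold.
    if np.linalg.norm(center_ego[:2]) > max_distance:
        return False
    # Rule 2: drop objects whose projected box falls entirely outside
    # this camera's image, i.e. the object is not visible in the image.
    inside = ((corners_img[:, 0] >= 0) & (corners_img[:, 0] < img_w) &
              (corners_img[:, 1] >= 0) & (corners_img[:, 1] < img_h))
    return bool(inside.any())
```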
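
For the calibration note, a sketch of how the intrinsics can be packed into a KITTI-style calibration entry, assuming a 3x3 intrinsic matrix `K` has already been read from the Argoverse calibration. This mirrors the convention described above (identity 'R0_rect', all intrinsics in 'P2') but is not the adapter's exact code:

```python
import numpy as np

def to_kitti_projection(K):
    """Pack a 3x3 camera intrinsic matrix K into KITTI-style entries:
    'P2' becomes [K | 0] and 'R0_rect' is left as the identity, so the
    KITTI projection P2 @ R0_rect reduces to the camera intrinsics."""
    P2 = np.hstack([K, np.zeros((3, 1))])
    R0_rect = np.eye(3)
    return P2, R0_rect
```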

## Reference
- [1] M. Chang et al., Argoverse: 3D Tracking and Forecasting with Rich Maps, CVPR2019, Long Beach, U.S.A
- [2] A. Geiger et al., Are we ready for Autonomous Driving? The KITTI Vision Benchmark Suite, CVPR2012, Rhode Island, U.S.A
- [3] Y. Wang et al., Train in Germany, Test in The USA: Making 3D Object Detectors Generalize, CVPR2020, Seattle, U.S.A