
Some questions #27

Open
xbq1994 opened this issue Sep 13, 2023 · 4 comments

@xbq1994
xbq1994 commented Sep 13, 2023

Hi, when I run the following command:

```
python -m torch.distributed.run --nproc_per_node=8 train.py --cfg-path lavis/projects/blip2/train/3dvqa_ft.yaml
```

I ran into a few issues:

  1. I downloaded the weights from "https://huggingface.co/facebook/opt-2.7b" to a local directory and pointed 'opt_model' in the code at the local weights, but I get an error saying that the weight and model sizes don't match (see the checkpoint-loading sketch below).

  2. What directory should I place the downloaded dataset in?

  3. I found that the three annotation files referenced in 3dvqa_ft.yaml do not exist. How can I obtain them?

```yaml
train:
  storage: ./examples/all_refer_questions_train.json
test:
  storage: ./examples/all_refer_questions_val.json
val:
  storage: ./examples/all_refer_questions_val.json
```
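
As a side note on question 1, one way to rule out a corrupted or incomplete download before pointing 'opt_model' at a local path is to load the checkpoint directly with the transformers library. This is only a debugging sketch; the local directory below is a placeholder, and it does not reflect how this repo wires the model in.

```python
from transformers import AutoConfig, AutoModelForCausalLM, AutoTokenizer

local_dir = "./checkpoints/opt-2.7b"  # placeholder: wherever the HF files were saved

# If this loads without shape/size errors, the local copy of facebook/opt-2.7b
# is intact and the mismatch is more likely coming from the training config.
config = AutoConfig.from_pretrained(local_dir)
print(config.model_type, config.hidden_size)

model = AutoModelForCausalLM.from_pretrained(local_dir)
tokenizer = AutoTokenizer.from_pretrained(local_dir)
print(f"{sum(p.numel() for p in model.parameters()):,} parameters")
```
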
@evelinehong
Collaborator

  1. The code in this repo uses Flan-T5. The weights should be downloaded automatically by the scripts.
  2. The self.feature dir should be "features", and the self.voxel dir should be "points" from the scene data.
  3. Replace the annotation files with the JSON files in the Google Drive.

We will push another version to fix these path mismatches.
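
A quick way to sanity-check the layout described above (directory names taken from this reply and from 3dvqa_ft.yaml; the data root is a placeholder) could look like this:

```python
import os

data_root = "./data"  # placeholder for wherever the scene data was extracted

# Per the reply above: features live under "features", voxels under "points".
for sub in ("features", "points"):
    path = os.path.join(data_root, sub)
    print(path, "ok" if os.path.isdir(path) else "MISSING")

# Annotation files referenced by 3dvqa_ft.yaml (to be replaced with the
# JSON files from the Google Drive).
for ann in ("all_refer_questions_train.json", "all_refer_questions_val.json"):
    path = os.path.join("./examples", ann)
    print(path, "ok" if os.path.isfile(path) else "MISSING")
```
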

@jiuyouyun9

> 1. The code in this repo uses Flan-T5. The weights should be downloaded automatically by the scripts.
> 2. The self.feature dir should be "features", and the self.voxel dir should be "points" from the scene data.
> 3. Replace the annotation files with the JSON files in the Google Drive.
>
> We will push another version to fix these path mismatches.

For "3. replace the files with the json file in the google drive", json files in the google drive are 'data_part1_all_objaverse.json' and 'data_part2_scene.json'. In 3dvqa_ft.yaml, there are train, val, and test.
Do I need to split the 'data_part1_all_objaverse.json' and 'data_part2_scene.json' into train/val/test sets in proportion of (8:1:1)?
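
In case it helps, here is a minimal sketch of how such an 8:1:1 split could be produced, assuming each JSON file is a flat list of annotation records. The ratio, the merging of the two files, and the output file names (the test file name in particular) are assumptions on my part, not something the maintainers have confirmed.

```python
import json
import random

# Paths to the files from the Google Drive (adjust as needed).
source_files = ["data_part1_all_objaverse.json", "data_part2_scene.json"]

records = []
for path in source_files:
    with open(path) as f:
        records.extend(json.load(f))  # assumes each file is a JSON list

random.seed(0)
random.shuffle(records)

n_train = int(0.8 * len(records))
n_val = int(0.1 * len(records))
splits = {
    "./examples/all_refer_questions_train.json": records[:n_train],
    "./examples/all_refer_questions_val.json": records[n_train:n_train + n_val],
    "./examples/all_refer_questions_test.json": records[n_train + n_val:],  # hypothetical name
}

for out_path, subset in splits.items():
    with open(out_path, "w") as f:
        json.dump(subset, f)
```
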

@xbq1994
Author

xbq1994 commented Sep 19, 2023

> 1. The code in this repo uses Flan-T5. The weights should be downloaded automatically by the scripts.
> 2. The self.feature dir should be "features", and the self.voxel dir should be "points" from the scene data.
> 3. Replace the annotation files with the JSON files in the Google Drive.
>
> We will push another version to fix these path mismatches.

Thanks! Do I need to split 'data_part1_all_objaverse.json' and 'data_part2_scene.json' into train/val/test sets in an 8:1:1 ratio? I also found that you have uploaded the files "voxelized_features_sam_nonzero_preprocess.zip" and "voxelized_voxels_sam_nonzero_preprocess.zip"; what are they for?

@cazhang

cazhang commented Feb 29, 2024

> 1. The code in this repo uses Flan-T5. The weights should be downloaded automatically by the scripts.
> 2. The self.feature dir should be "features", and the self.voxel dir should be "points" from the scene data.
> 3. Replace the annotation files with the JSON files in the Google Drive.
>
> We will push another version to fix these path mismatches.
>
> Thanks! Do I need to split 'data_part1_all_objaverse.json' and 'data_part2_scene.json' into train/val/test sets in an 8:1:1 ratio? I also found that you have uploaded the files "voxelized_features_sam_nonzero_preprocess.zip" and "voxelized_voxels_sam_nonzero_preprocess.zip"; what are they for?

Are you using the voxelized point cloud as input? I thought the paper used a continuous representation.
For the voxels, how is the discretization done?
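
For what it's worth, a common way to discretize a point cloud is to quantize the continuous coordinates onto a regular grid and keep one entry per occupied cell. The sketch below only illustrates that generic technique; the 5 cm voxel size is an arbitrary assumption and this is not the repo's actual preprocessing.

```python
import numpy as np

def voxelize(points: np.ndarray, voxel_size: float = 0.05) -> np.ndarray:
    """Quantize (N, 3) continuous xyz coordinates onto a regular voxel grid.

    Returns the integer grid coordinates of the occupied voxels,
    one row per occupied cell.
    """
    origin = points.min(axis=0)                        # shift grid to the min corner
    coords = np.floor((points - origin) / voxel_size)  # continuous -> grid indices
    return np.unique(coords.astype(np.int64), axis=0)  # one entry per occupied cell

# Example: 1000 random points inside a 1 m cube, 5 cm voxels.
pts = np.random.rand(1000, 3)
print(voxelize(pts).shape)  # (num_occupied_voxels, 3)
```
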
