Questions about the synthetic datasets #36
Comments
Hi, for 1-2, the

For 3, from my observation: if the camera viewpoints are all initialized as identity rotations, the final camera viewpoints will only cover 90-180 of the 360 degrees on a circle, even after optimization. I don't have numbers, but the results will look very bad.

For 4, I'm planning to release all of those in the future (possibly after this CVPR deadline). Please send me an email if you need them at an earlier date.
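The coverage claim above can be sanity-checked numerically. The sketch below is a hypothetical helper (not from the BANMo codebase) that estimates how many degrees of the 360° azimuth circle a set of root poses covers, assuming each pose is a world-to-camera rotation matrix:

```python
import numpy as np

def azimuth_coverage_deg(rotations, bins=36):
    """Estimate how many degrees of the azimuth circle are covered
    by a set of camera orientations.

    rotations: (N, 3, 3) world-to-camera rotation matrices.
    Returns covered degrees in [0, 360].
    """
    # Camera viewing direction in world coordinates: R^T @ [0, 0, 1]
    view_dirs = rotations.transpose(0, 2, 1) @ np.array([0.0, 0.0, 1.0])
    azimuths = np.degrees(np.arctan2(view_dirs[:, 0], view_dirs[:, 2])) % 360.0
    # Count occupied azimuth bins (10 degrees each by default)
    occupied = np.unique((azimuths // (360.0 / bins)).astype(int))
    return len(occupied) * 360.0 / bins

# All-identity initialization collapses to a single bin:
ident = np.tile(np.eye(3), (10, 1, 1))
print(azimuth_coverage_deg(ident))  # 10.0 (one 10-degree bin)
```

Running this on the optimized root poses would quantify the "90-180 over 360 degrees" observation instead of eyeballing it.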
Thanks for the answer! I used the latest main branch to run an optimization on the Eagle dataset without any modifications to the code base. However, the eagle's head is missing in the reconstruction. Do you know how to get the same reconstruction results here? I followed the same steps as in your instructions for data processing, optimization, and evaluation. Is there anything I need to change in the code to achieve the same quality as your results? My results are shown below:

a-eagle-.0.-all.mp4
Hi, the results look really strange. Both the camera pose and the deformation seem to be off by a lot. Before I get resources to reproduce it, it would help if you could verify a couple of things. Are you able to get reasonable results for the cat videos? Could you post the results of drawing the root pose trajectory here?
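To make the root-pose-trajectory check concrete: one simple way to draw it is to convert each frame's world-to-camera extrinsics into a camera center and plot the centers in sequence. This is a generic sketch, not BANMo's actual tooling; the file layout and paths in the `__main__` block are assumptions:

```python
import numpy as np

def camera_centers(extrinsics):
    """extrinsics: (N, 3, 4) world-to-camera [R|t] matrices.
    Returns the (N, 3) camera centers in world coordinates, C = -R^T t."""
    r = extrinsics[:, :, :3]
    t = extrinsics[:, :, 3]
    return -np.einsum("nij,ni->nj", r, t)

if __name__ == "__main__":
    import glob
    import matplotlib.pyplot as plt
    # Assumed layout: each camera text file stores [R|t] in its first 3 rows.
    paths = sorted(glob.glob("cam-files/*.txt"))  # hypothetical location
    mats = np.stack([np.loadtxt(p)[:3, :4] for p in paths])
    centers = camera_centers(mats)
    plt.plot(centers[:, 0], centers[:, 2], marker=".")  # top-down (x-z) view
    plt.gca().set_aspect("equal")
    plt.savefig("root-trajectory.png")
```

A healthy trajectory for an orbiting capture should trace an arc around the object; a collapsed or tangled plot would point at the root pose being off.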
I can get reasonable results for the cat videos. This is the result I got after running the command:

mesh-cam.mp4
I reverted to this version of the repository and ran the Eagle optimization again (basically without your latest changes to the eikonal loss). The results improved, but the eagle's head is still missing, and there is still a big gap between this and the results in the paper and on the website.

a-eagle-.0.-all1.mp4
mesh-cam1.mp4
Interesting that the eikonal loss made it worse for the eagle. I also noticed an error in the doc, which should be querying the model after all the training stages (...-ft2). But assuming you're already querying the model from
It happens in the original paper as well when the surface is not well estimated. The motivation for adding the eikonal loss is to reduce such artifacts. Perhaps freezing the root pose in the first stage, plus the eikonal loss, will further improve it.
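For readers following along, the eikonal loss being discussed is the standard regularizer that pushes the norm of the SDF gradient toward 1, discouraging degenerate surface geometry. A minimal sketch of the idea, using finite differences for illustration (the real training code would compute the gradients with autograd through the network):

```python
import numpy as np

def eikonal_loss(sdf_fn, points, eps=1e-4):
    """Mean squared deviation of ||grad sdf|| from 1 at the given points.

    sdf_fn: maps an (N, 3) array of points to (N,) signed distances.
    Gradients are approximated with central finite differences here;
    a network-based SDF would use autograd instead.
    """
    grads = np.zeros_like(points, dtype=float)
    for axis in range(points.shape[1]):
        offset = np.zeros(points.shape[1])
        offset[axis] = eps
        grads[:, axis] = (sdf_fn(points + offset) - sdf_fn(points - offset)) / (2 * eps)
    norms = np.linalg.norm(grads, axis=1)
    return float(np.mean((norms - 1.0) ** 2))

# A true SDF (distance to a sphere) has unit gradient norm, so the loss is ~0:
sphere_sdf = lambda p: np.linalg.norm(p, axis=1) - 0.5
pts = np.array([[1.0, 0.0, 0.0], [0.0, 2.0, 0.0], [1.0, 1.0, 1.0]])
print(eikonal_loss(sphere_sdf, pts))  # ~0
```

Any field whose gradient norm drifts away from 1 (a common symptom of a poorly estimated surface) is penalized, which is why the loss can clean up artifacts but may also fight the optimization when the pose is wrong.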
I see. Thanks so much for your help! Besides the minor typo you mentioned earlier, I think the evaluation command in your doc is missing a
Hi Gengshan,
Thanks for the great work!
I have a couple of questions regarding the synthetic datasets (Eagle and Hands) and the other results on your website:
1. The instructions on the synthetic datasets use the ground-truth camera poses in training. However, the paths to the `rtk` files are commented out in the config. If I directly use this config, it won't use the ground-truth camera poses in training, right?
2. I followed the same instructions for Eagle dataset preparation, but it does not save the `rtk` files to the locations specified in the config. Should I manually change the paths?
3. Have you tried running BANMo optimizations on Eagle and Hands without the ground-truth camera poses? If so, how are the results visually and quantitatively (in terms of Chamfer distance and F-score)?
4. I noticed that you have results for more objects, such as Penguins and Robot-Laikago, on your website. Do you know where I can get access to these datasets as well?