Thanks for sharing your great work.
As the title says, I'm wondering why the result of the 3D grounding task is a 2D box instead of a 3D box.
3D-LLM was evaluated on the ScanQA dataset, which normally expects a 3D location, along with the answer to a question about a specific object, as output.
Is the 2D box just a visualization example? I wonder whether 3D-LLM can estimate a 3D box as its output.
Thanks in advance for your answer!