Replies: 4 comments 4 replies
-
Hi @devel4848. Are you using the CUDA or LLVM backend? This is not something we really implement ourselves: we rely on OptiX/Embree to handle the acceleration structure and instancing for us, so maybe we're misusing their API. To be honest, I don't think we've made use of the instancing feature recently (or ever) ourselves, so I don't have any expectations and our code might have a bug. Intuitively I agree with you, and I find your results surprising.
-
Hello @njroussel. I used the
-
I observed this behaviour in a similar situation (mesh-based trees, a couple of thousand instances, scalar variant).
-
Hi @njroussel, I use the
-
I did some experiments with shapegroup instances using the Mitsuba 3 Python API. I rendered the same scene with two different representations. The scene is a forest made of identical trees; one tree consists of a trunk and many identical leaves.
In the first representation I used a few instances of a few big meshes: one mesh for the trunk and one big mesh containing all the leaves. Each tree is made of one instance of the trunk and one instance of the single whole-foliage mesh.
In the second representation I used many instances of small meshes: one mesh for the trunk and one small mesh for a single leaf. Each tree is made of one instance of the trunk and ~9000 instances of the one-leaf mesh.
In both representations I loaded each mesh only once and created one shapegroup per mesh.
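For reference, the second representation can be sketched roughly like this with Mitsuba 3's dict-based scene API. This is a simplified sketch, not my exact code: the filenames, counts and key names are placeholders, per-leaf transforms are omitted, and the convention of nesting a `{"type": "ref", "id": ...}` node inside an `"instance"` shape is what I assume the API expects.

```python
# Sketch of the second representation: one shapegroup per mesh,
# and many "instance" shapes that reference it.
# Filenames, counts and transforms are placeholders.

def make_scene(num_trees, leaves_per_tree):
    scene = {
        "type": "scene",
        "integrator": {"type": "path"},
        # Shapegroups: each mesh is loaded only once.
        "trunk_group": {
            "type": "shapegroup",
            "trunk": {"type": "ply", "filename": "trunk.ply"},
        },
        "leaf_group": {
            "type": "shapegroup",
            "leaf": {"type": "ply", "filename": "leaf.ply"},
        },
    }
    for t in range(num_trees):
        # One trunk instance per tree.
        scene[f"trunk_{t}"] = {
            "type": "instance",
            "shapegroup": {"type": "ref", "id": "trunk_group"},
        }
        # Many leaf instances per tree (~9000 in my real scene).
        for l in range(leaves_per_tree):
            scene[f"leaf_{t}_{l}"] = {
                "type": "instance",
                "shapegroup": {"type": "ref", "id": "leaf_group"},
                # "to_world": per-leaf transform would go here
            }
    return scene

# Tiny example: 2 trees with 3 leaves each.
scene = make_scene(num_trees=2, leaves_per_tree=3)
n_instances = sum(1 for v in scene.values()
                  if isinstance(v, dict) and v.get("type") == "instance")
print(n_instances)  # 2 trunks + 6 leaves = 8
# Loading would then be: mi.load_dict(scene), with mitsuba imported as mi.
```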
I measured the RAM used to load and render the scene. The second representation (many instances of small meshes) used a huge amount of RAM compared to the first (few instances of big meshes): for example, 100 trees with the second representation used 4.4GB of RAM, whereas ~60000 trees with the first representation used only 2.8GB.
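For completeness, here is a minimal sketch of how such a peak-RAM measurement can be taken around scene loading, using only the Python standard library. This is not necessarily how I measured it: note that `ru_maxrss` is reported in kilobytes on Linux but bytes on macOS, and the allocation below is just a stand-in for the actual `mi.load_dict` call.

```python
import resource
import sys

def peak_rss_mib():
    """Peak resident set size of this process, in MiB.

    ru_maxrss is in kilobytes on Linux, bytes on macOS.
    """
    rss = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    if sys.platform == "darwin":
        rss //= 1024
    return rss / 1024.0

before = peak_rss_mib()
# Stand-in for scene loading: touch ~50 MiB of memory.
data = [bytearray(1024 * 1024) for _ in range(50)]
after = peak_rss_mib()
print(f"peak RSS grew by ~{after - before:.0f} MiB")
```

In the real measurement, the allocation line would be replaced by the call that loads the scene (and a second reading taken after rendering).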
I'm surprised by how costly instances are in terms of RAM usage. Is this the expected result, or is it a sign that something is wrong with my implementation?