interoperability and data exchange with Mitsuba and PyTorch #935

Answered by njroussel
lesphere asked this question in Q&A
In fact, I want to record values like positions of intersections, incoming and outgoing directions for each bounce to generate a training dataset. The value for each depth of each ray forms a sample in the dataset.

This should be possible by filling the aov field in the Integrator's sample method. However, unless you're rendering one ray/sample per pixel, the output AOVs will contain the average over all rays of that pixel, which is most likely meaningless for something like the incoming/outgoing directions.
If you actually want to store this information per ray, my recommendation would be to move away from the traditional Mitsuba rendering pipeline and instead script your own utility pipeline.
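As a rough illustration of the data-layout side of such a scripted pipeline, the sketch below shows how per-bounce buffers (positions, incoming/outgoing directions) could be assembled into one-sample-per-(ray, bounce) training data. It uses NumPy only and random placeholder data; the buffer names, shapes, and the `valid` hit mask are all assumptions, not Mitsuba API. In a real pipeline these arrays would come out of a Dr.Jit tracing loop and could be handed to PyTorch with `torch.from_numpy`.

```python
import numpy as np

# Hypothetical per-bounce buffers a scripted tracing loop might produce.
# Shapes: (max_depth, n_rays, 3). Random stand-in data for illustration.
n_rays, max_depth = 4, 3
rng = np.random.default_rng(0)

positions = rng.standard_normal((max_depth, n_rays, 3))  # intersection points
wi = rng.standard_normal((max_depth, n_rays, 3))         # incoming directions
wo = rng.standard_normal((max_depth, n_rays, 3))         # outgoing directions

# Which rays actually hit a surface at each depth (assumed mask).
valid = np.array([
    [True,  True,  True,  True],
    [True,  True,  False, True],
    [False, True,  False, True],
])

# One training sample per (ray, bounce): concatenate the features along
# the last axis, then keep only the entries with a valid intersection.
features = np.concatenate([positions, wi, wo], axis=-1)  # (depth, rays, 9)
samples = features[valid]                                # (n_valid, 9)
print(samples.shape)  # → (9, 9): 9 valid (ray, bounce) pairs, 9 features each
```

The point of flattening over both rays and depths is that each bounce becomes an independent dataset row, matching the "value for each depth of each ray forms a sample" formulation in the question.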

Replies: 1 comment 4 replies

Answer selected by lesphere