Hi, I'd like to collect some intermediate values produced during `render_backward()` to train a neural network, but I don't know how to do the data exchange with JIT compilation enabled. In detail, in `src/python/python/ad/integrators/prb.py`, `PRBIntegrator.sample()` contains a loop that does something like ray tracing; in adjoint mode, this loop computes derivatives in a similar ray-tracing fashion. I use `aovs` to record some extra data, but I don't know how to gather the data from the different threads into a training dataset (in the machine-learning sense). Here is an example modified from the original code of Mitsuba 3:

```python
def sample(..., aovs, ...):
    ...
    while loop(active):
        ...
        # positions of intersections, incoming and outgoing directions,
        # and some other values are appended to aovs
        aovs.append(...)
        ...
    ...

def render_backward(...):
    ...
    aovs = []
    sample(..., aovs, ...)
    # How to put the data in aovs into a training dataset
    # that can be used to train a NN later?
```

In `main.py`:

```python
mi.set_variant('cuda_ad_rgb')

for it in range(...):
    image = mi.render(...)
    loss = mse(image, ...)
    dr.backward(loss)
    # use the data generated in the last frame to train one epoch
```

Suppose that `type(aovs[0]) == mi.Point3f`. Can I achieve this with something like the following?

```python
pos_flat = dr.ravel(aovs[0])
pos_tensor = mi.TensorXf(pos_flat, ...)
dataset.insert_pos(pos_tensor)
```

If so, how can I get the `dataset` reference inside `render_backward()`? Also, which parts of the code will be compiled into a kernel, which statements will be ignored, and what does each statement end up doing?
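To make this concrete, below is a small self-contained toy version of the conversion I have in mind. The `store_positions` helper and the `dataset_positions` list are things I would define myself in `main.py`; they are not part of Mitsuba, and getting a reference to them inside `render_backward()` is exactly the part I'm unsure about.

```python
import mitsuba as mi
import drjit as dr
import numpy as np

mi.set_variant('cuda_ad_rgb')

# Hypothetical "dataset": just a Python list that the training loop would read later.
dataset_positions = []

def store_positions(aov_points: mi.Point3f):
    n = dr.width(aov_points)
    flat = dr.ravel(aov_points)                 # Float of length 3 * n (x, y, z interleaved)
    tensor = mi.TensorXf(flat, shape=(n, 3))    # one row per recorded point
    dataset_positions.append(np.array(tensor))  # copy to the CPU for training

# Dummy data standing in for aovs[0]:
store_positions(mi.Point3f([0.0, 1.0], [2.0, 3.0], [4.0, 5.0]))
print(dataset_positions[0].shape)               # (2, 3)
```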
Hi @lesphere, I'm not quite sure I understand what you are trying to achieve. There is no "data" in AOVs in …
This should be possible by filling the `aov` field in the `Integrator`'s `sample` method. However, unless you're rendering one ray/sample per pixel, the output AOVs will contain the average over all rays of that pixel, which is most likely meaningless for something like the incoming/outgoing directions.

If you actually want to store this information per ray, my recommendation would be to move away from the traditional Mitsuba rendering pipeline and instead script your own utility pipeline.
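To illustrate that second route, here is a rough, untested sketch of a standalone data-generation pass that traces one camera ray per pixel and keeps the per-ray quantities directly, without going through the film. The Cornell box scene, the one-sample-per-pixel setup, and the particular quantities recorded are only placeholders; adapt them to whatever your network needs.

```python
import mitsuba as mi
import drjit as dr
import numpy as np

mi.set_variant('cuda_ad_rgb')

scene = mi.load_dict(mi.cornell_box())          # placeholder scene
sensor = scene.sensors()[0]
res = sensor.film().crop_size()
n_rays = res[0] * res[1]                        # one sample per pixel

sampler = sensor.sampler().clone()
sampler.seed(0, n_rays)

# One jittered film position in [0, 1]^2 per ray
idx = dr.arange(mi.UInt32, n_rays)
pix = mi.Vector2f(idx % res[0], idx // res[0])
pos = (pix + sampler.next_2d()) / mi.Vector2f(res[0], res[1])

ray, _weight = sensor.sample_ray(0.0, sampler.next_1d(), pos, sampler.next_2d())
si = scene.ray_intersect(ray)

# Per-ray quantities -- no averaging over the samples of a pixel
positions = si.p                  # hit positions (mi.Point3f)
dirs_out  = si.to_world(si.wi)    # direction towards the camera, in world space
valid     = si.is_valid()

# Flatten to (n_rays, 3) NumPy arrays for a training dataset;
# dr.ravel interleaves the x, y, z components per entry.
pos_np  = np.array(mi.TensorXf(dr.ravel(positions), shape=(n_rays, 3)))
dir_np  = np.array(mi.TensorXf(dr.ravel(dirs_out),  shape=(n_rays, 3)))
mask_np = np.array(valid)
```

From there you can write the arrays to disk or feed them straight into your training code, and you are free to loop over bounces yourself and record one array per bounce rather than relying on the integrator's AOV plumbing.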