get the radiance at a single surface point #1290
Hi, everyone. Since an accurate outgoing-radiance estimate needs a lot of rays, which performs badly on the CPU, I tried to use a ray batch with Dr.Jit (CUDA) to accelerate it. However, the ray batch returns the same radiance for each ray in the batch. I ran a simple test with the following code:

sampler = mi.load_dict({'type': 'independent', 'sample_count': 16})
integrator = mi.load_dict({'type': 'path'})
offset = mi.Point3f(0.5, 0.5, 0.5)
origin = mi.Point3f(3, 0.5, 0.5)
direction = mi.Vector3f(-1, 0, 0)
# normalize direction
normalized_direction = dr.normalize(direction)
# single ray
ray = mi.Ray3f(origin, normalized_direction)
for i in range(10):
    radiance_single = integrator.sample(scene, sampler, ray)
    print(f"single radiance: {radiance_single}")
# ray batch, with the same origin and direction
ray_count = 10
origins = dr.tile(origin, ray_count)
directions = dr.tile(normalized_direction, ray_count)
rays = mi.Ray3f(origins, directions)
radiance_batch = integrator.sample(scene, sampler, rays)
# result output
print(f"batch radiance: {radiance_batch[0]}")

And the result is:

What can be observed is that the single ray yields a different (random) result on each call to integrator.sample(), while the ray batch returns the same radiance for every ray. So, how can I make the ray batch return independent random-walk results? Or is there another way to quickly obtain an accurate outgoing radiance estimate (with many ray samples) at a surface intersection point?
Replies: 1 comment
Hello @boringfish,

When using mi.render(), the sampler is automatically seeded with the correct wavefront size. In your test script, you call integrator.sample() directly, which does not take care of seeding for you.

Since you first rendered 1 ray at a time, I assume that you seeded the sampler with a wavefront size of 1. This means that from that point on, the sampler generates only 1 random number per draw; you can check this with sampler.next_1d(). When you then switch to a bundle of ray_count rays, that 1 random number gets broadcast (copied) over all ray_count lanes, which is why you get the same result for each lane.

Also, don't forget to schedule the state of the sampler after each call to integrator.sample().

Here is a fixed example:

import mitsuba as mi
import drjit as dr
mi.set_variant("cuda_ad_rgb")
scene = mi.load_dict(mi.cornell_box())
sampler = mi.load_dict({'type': 'independent', 'sample_count': 16})
integrator = mi.load_dict({'type': 'path'})
# origin = mi.Point3f(3, 0.5, 0.5)
origin = mi.Point3f(0, 0, 3.90)
direction = mi.Vector3f(0, 0, -1)
# normalize direction
normalized_direction = dr.normalize(direction)
# --- single ray
sampler.seed(1234, wavefront_size=1)
ray = mi.Ray3f(origin, normalized_direction)
for i in range(10):
    radiance_single = integrator.sample(scene, sampler, ray)
    sampler.schedule_state()  # <- important!
    print(f"single radiance: {radiance_single}")
# --- ray batch, with the same origin and direction
ray_count = 10
origins = dr.tile(origin, ray_count)
directions = dr.tile(normalized_direction, ray_count)
rays = mi.Ray3f(origins, directions)
sampler.seed(1234, wavefront_size=ray_count)
radiance_batch = integrator.sample(scene, sampler, rays)
sampler.schedule_state() # <- important!
# result output
print(f"batch radiance: {radiance_batch[0]}")
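One way to see the seeding issue without a renderer: the sketch below is a rough pure-Python analogy, not Mitsuba code (`sample_lanes` is a made-up helper, and plain `random.Random` streams stand in for the sampler's per-lane state). Seeding for a single lane and then reusing that state for a batch copies one value across every lane, while seeding with the full wavefront size yields an independent value per lane.

```python
import random

def sample_lanes(seed, wavefront_size, lane_count):
    # One independent random stream per seeded lane. Assumes wavefront_size
    # is either 1 or equal to lane_count, mirroring the two cases above.
    streams = [random.Random(seed + i) for i in range(wavefront_size)]
    if wavefront_size == 1:
        value = streams[0].random()
        return [value] * lane_count  # broadcast: every lane sees the same number
    return [streams[i].random() for i in range(lane_count)]

same = sample_lanes(1234, 1, 10)       # seeded for 1 lane, reused for 10
distinct = sample_lanes(1234, 10, 10)  # seeded for all 10 lanes
print(len(set(same)), len(set(distinct)))
```

With the sampler correctly re-seeded for the batch, the outgoing radiance at the point can then be estimated by averaging the per-lane spectra (Dr.Jit provides horizontal reductions such as dr.mean() for this).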