
[Feature] Hemispherical distant sensor plugin #11

Draft: wants to merge 12 commits into master
Conversation

@leroyvn (Contributor) commented Jun 15, 2021

Description

This PR adds a hemispherical distant sensor, which can be seen as an adjoint to the envmap emitter. It records the radiance leaving the scene over a hemisphere. It can be useful to get a qualitative idea of the effective BRDF of a surface with complex geometry.

The code is mostly taken from the distant sensor plugin, and I don't know to what extent it would be desirable and/or feasible to move the common parts to a library header (in the end, only the constructor and the direction sampling code differ).

An example of RGB output (a simple rectangular surface with a roughconductor BSDF; the coloured dots each correspond to a directional emitter in the scene; note that the pixel index origin is at the bottom left of the image):

Screenshot 2021-06-15 at 12 29 13

To do

  • Improve docs with an example of output (also add the up direction to it) and a schematic to explain the film-hemisphere mapping
  • Update docs with actual test scene
  • Add tests

Testing

A set of tests similar to those written for distant is included. We test for object construction, ray sampling and overall result correctness in a few scenarios.

Checklist

  • My code follows the style guidelines of this project
  • My changes generate no new warnings
  • My code also compiles for cuda_* and llvm_* variants. If you can't test this, please note it below
  • I have commented my code
  • I have made corresponding changes to the documentation
  • I have added tests that prove my fix is effective or that my feature works
  • I cleaned the commit history and removed any "Merge" commits
  • I give permission that the Mitsuba 2 project may redistribute my contributions under the terms of its license

@Speierers (Member):

Thanks for this PR @leroyvn ,

Would it make sense for hdistant to have a ref<DistantSensor> m_child member that you could use to call the eval and pdf routines? Then all you would have to do is redefine the sampling methods.

On the other hand, this is not a lot of code, so I don't think it would be worth sharing some of it in a header file.

@leroyvn (Contributor, Author) commented Jun 16, 2021

Would it make sense for hdistant to have a ref<DistantSensor> m_child member that you could use to call the eval and pdf routines? Then all you would have to do is redefine the sampling methods.

I hadn't thought of that. Actually, I think these two methods will differ from the distant implementation, so it probably makes more sense to keep them separate. Side note: shall I implement these as well? They make a lot of sense in the context of using these distant sensors with a light tracing integrator.

@Speierers (Member):

Okay then, let's keep those separate 👍

It would be great to have them, yes. Otherwise we will need to revisit those plugins once again when the light tracer has landed.

@leroyvn (Contributor, Author) commented Jun 16, 2021

I looked into it and it seems that the DirectionSample3f struct is currently only suited to emitter sampling. I guess the emitter member should become endpoint, but this would be beyond the scope of this PR (and overlap with the light tracer work). However, leaving the emitter field unset does not seem very clean to me...

@Speierers (Member):

Makes sense. @merlinND is this something you encountered in your work on the light tracer?

@leroyvn (Contributor, Author) commented Jun 22, 2021

Hi @Speierers, I just pushed an update with tests and docs. It now requires a commit on the data submodule, for which I'll submit a PR. The documentation notably explains how to orient the film properly (see below; please tell me if it's not clear).

Screenshot 2021-06-22 at 16 57 58

@Speierers (Member):

Hi @leroyvn,

Great illustration for the doc! I am just a little confused by the target, origin and up labels: IMO it is hard to tell what they correspond to in the illustration. Also, on the bottom row, does the orange arrow represent the up vector?

@leroyvn (Contributor, Author) commented Jun 23, 2021

I am just a little confused by the target, origin and up labels. IMO it is hard to tell what they correspond to in the illustration.

I'll make a new proposal, hopefully clearer.

Also on the bottom row, does the orange arrow represent the up vector?

Yes, and the direction given by target - origin is the "bullseye" at the centre (I used conventions close to those of technical/industrial drawing).

@leroyvn (Contributor, Author) commented Jun 23, 2021

Maybe something like this?

Screenshot 2021-06-23 at 10 45 33

@leroyvn (Contributor, Author) commented Jun 23, 2021

So this is what we have now, with some text explaining what the orange markers on the plots mean.

Screenshot 2021-06-23 at 11 09 39

@tomasiser (Member):

Hi there! I'll sneak into this discussion 😋 I like the illustrations, but I am a bit confused! Is the hemisphere located above the blue plane, in the target direction? Is it recording light that was emitted and then bounced off the blue glossy plane, or radiance that left the emitter directly? And where do the light sources actually point, or are they point light sources?

@leroyvn (Contributor, Author) commented Jun 23, 2021

Hi @tomasiser, it's indeed clearer with the full docs. The sample scene has three directional emitters which illuminate a rectangular patch with a diffuse, non-absorbing BRDF. The sensor records exitant radiance (here, reflected by the surface, but it can be something more complex) over an (infinitely distant) hemisphere pointed to by the yellow vector in the schematic. So in this example, it is "above" the blue plane. This vector is set by the target and origin parameters of a look-at transform passed to the to_world parameter of the sensor, and it is independent of the reflecting shape.

hdistant is an extension of the distant sensor, which is itself the adjoint of the directional emitter. You can see hdistant as the adjoint of the envmap emitter (although they do not cover exactly the same angular region).

In practice, hdistant is useful to get a visual representation of the reflectance pattern of a complex surface. You must illuminate the scene appropriately, i.e. with a directional emitter, in order to retrieve meaningful reflectance values (see the reference paper by Nicodemus (1973) for a thorough introduction to surface reflectance measurement principles). Its weaknesses are:

  • no importance sampling;
  • no control over the positioning of sampled directions (the pixel grid won't align with a particular direction and the film will be sampled continuously).

Parametrising distant sensors is a good way to address these limitations.
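To make the orientation setup concrete, here is a hypothetical scene fragment. The parameter names (to_world, lookat with origin/target/up) follow the conventions described above and those of existing Mitsuba plugins; the exact interface of this PR may differ.

```xml
<!-- Hypothetical fragment: target - origin sets the hemisphere axis,
     up orients the film. Values are illustrative only. -->
<sensor type="hdistant">
    <transform name="to_world">
        <lookat origin="0, 0, 1" target="0, 0, 0" up="0, 1, 0"/>
    </transform>
    <film type="hdrfilm">
        <integer name="width" value="32"/>
        <integer name="height" value="32"/>
    </film>
</sensor>

<!-- One of the directional emitters illuminating the patch -->
<emitter type="directional">
    <vector name="direction" value="0, 0, -1"/>
</emitter>
```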

@Speierers (Member):

@leroyvn I like the new illustration very much :)

std::tie(ray, ray_weight) = sample_ray(
    time, wavelength_sample, film_sample, aperture_sample, active);

// Since the film size is always 1x1, we don't have differentials
Review comment (Member):

Is that really true? The example shows rendered 2D images with spatial variation. The sample_ray_differential function should track how adjacent rays change in their 'o' and/or 'd' parameters.

@leroyvn (Contributor, Author):

No, it's not, sorry for this mistake! I started from the distant code and forgot to update that part; I'll fix it.

@leroyvn (Contributor, Author):

I added ray differentials using the same code as for the main ray direction and origin (basically skipping the spectrum-related parts); I hope it's fine.

@wjakob force-pushed the master branch 8 times, most recently from d43e297 to 8bc5ae2 on September 13, 2021
@wjakob (Member) commented Oct 21, 2021

Hi Vincent,

I just took another look at this PR and thought a bit more about the 'shape' target feature common to this sensor and 'distant.cpp'.

A few thoughts:

  • Having this shape feature seems reasonable, and I can understand why it is needed. However, the documentation could be improved. You could explain some of the subtleties and mention that it only makes sense to use truly flat surfaces here.

  • Expanding into a specialized template-based implementation based on a property is only warranted in some very rare use cases (like a bitmap whose underlying representation changes completely depending on whether it is monochromatic or spectral). It makes the code quite a bit more complex than is warranted here, and I don't think you are really gaining any performance. Could you convert this into a single class that uses if instead of if constexpr (on a template parameter) in the sampling routine?

  • I don't think I fully understand why sample_ray_differential is so complicated (it uses another method, sample_ray_dir_origin, that may sample the shape multiple times altogether). Can't we use a single ray origin position for o_x and o_y and just shift the direction?

Thanks,
Wenzel

@leroyvn (Contributor, Author) commented Nov 19, 2021

Hi @wjakob, sorry it took me a while to get back to this. I tried to address your comments:

  • I rolled back to a template-free implementation;
  • I added a warning to the docstring so as to make it clear to users that they should basically use rectangles and disks to control ray targets, although other fancy flat surfaces would work;
  • I rewrote the sample_ray_differential() method: shape sampling is now done only once.

If this is fine, I'll also propagate the documentation update to distant (alongside some cleanup and updates we discussed last time we talked).

4 participants