
Extend projective integrators to support arbitrary sensors #1163

Open
dvicini opened this issue May 6, 2024 · 2 comments
Labels
enhancement New feature or request

Comments

@dvicini
Member

dvicini commented May 6, 2024

Currently, the projective integrators explicitly disallow the use of any sensor that is not the standard perspective sensor:

if not sensor.__repr__().startswith('PerspectiveCamera'):

I wanted to use the batch sensor to cut down on some Python overhead, but currently cannot, because the sensor Jacobian implementation is heavily specialized to perspective sensors.

This isn't urgent, but it would be good to eventually support the batch sensor in the projective integrators as well.
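For context, a minimal sketch of the kind of setup that currently gets rejected (the exact nested-dictionary layout below is an assumption based on Mitsuba 3's `load_dict` conventions, not taken from this issue):

```python
# Hypothetical sketch: a batch sensor wrapping two perspective cameras.
# A BatchSensor's repr() does not start with 'PerspectiveCamera', so the
# projective integrators' check rejects it even though every child sensor
# is itself a standard perspective camera.
batch_sensor = {
    'type': 'batch',
    'sensor_0': {'type': 'perspective', 'fov': 45},
    'sensor_1': {'type': 'perspective', 'fov': 45},
    'film': {'type': 'hdrfilm', 'width': 256, 'height': 128},
}
```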

@dvicini dvicini added the enhancement New feature or request label May 6, 2024
@merlinND
Member

Also discussed for orthographic sensors in #1160.

@ejtwe

ejtwe commented May 21, 2024

How is the function perspective_sensor_jacobian() derived? The calculation of J_num/J_den*multiplier is confusing.

def perspective_sensor_jacobian(self,
                                sensor: mi.Sensor,
                                ss: mi.SilhouetteSample3f):
    """
    The silhouette sample `ss` stores (1) the sampling density in the scene
    space, and (2) the motion of the silhouette point in the scene space.
    This Jacobian corrects both quantities to the camera sample space.
    """
    if not sensor.__repr__().startswith('PerspectiveCamera'):
        raise Exception("Only perspective cameras are supported")

    to_world = sensor.world_transform()
    near_clip = sensor.near_clip()
    sensor_center = to_world @ mi.Point3f(0)
    sensor_lookat_dir = to_world @ mi.Vector3f(0, 0, 1)
    x_fov = mi.traverse(sensor)["x_fov"][0]
    film = sensor.film()

    camera_to_sample = mi.perspective_projection(
        film.size(),
        film.crop_size(),
        film.crop_offset(),
        x_fov,
        near_clip,
        sensor.far_clip()
    )

    sample_to_camera = camera_to_sample.inverse()
    p_min = sample_to_camera @ mi.Point3f(0, 0, 0)
    multiplier = dr.sqr(near_clip) / dr.abs(p_min[0] * p_min[1] * 4.0)

    # Frame
    frame_t = dr.normalize(sensor_center - ss.p)
    frame_n = ss.n
    frame_s = dr.cross(frame_t, frame_n)

    J_num = dr.norm(dr.cross(frame_n, sensor_lookat_dir)) * \
            dr.norm(dr.cross(frame_s, sensor_lookat_dir)) * \
            dr.abs(dr.dot(frame_s, ss.silhouette_d))
    J_den = dr.sqr(dr.sqr(dr.dot(frame_t, sensor_lookat_dir))) * \
            dr.squared_norm(ss.p - sensor_center)

    return J_num / J_den * multiplier
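Not an authoritative answer, but one reading of the `multiplier` term: `p_min` is a corner of the near plane in camera space, so `|p_min[0] * p_min[1] * 4.0|` is the near plane's area, and `near_clip**2 / area` is the constant part of the conversion between the camera sample-space area measure and the directional measure at the sensor (the cosine and distance factors appear separately in `J_den`). A pure-Python sketch of that quantity, assuming an uncropped film (the `near_plane_multiplier` helper is hypothetical):

```python
import math

def near_plane_multiplier(x_fov_deg, aspect, near_clip):
    # Hypothetical re-derivation of `multiplier`, assuming no film crop:
    # half-extents of the near plane in camera space, from the horizontal
    # field of view.
    half_w = near_clip * math.tan(math.radians(x_fov_deg) / 2.0)  # |p_min[0]|
    half_h = half_w / aspect                                      # |p_min[1]|
    area = 4.0 * half_w * half_h      # |p_min[0] * p_min[1] * 4.0|
    return near_clip ** 2 / area      # dr.sqr(near_clip) / dr.abs(...)
```

For a 90° horizontal FOV, a square film, and `near_clip = 1`, this gives ≈ 0.25.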
