
APT Loss Functions #58

Open

smericks wants to merge 7 commits into swagnercarena:main from smericks:snpe

Conversation

smericks (Collaborator)

Adding diagonal and full covariance APT loss functions for SNPE training with paltas. Includes test cases.
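For context on what an APT (SNPE-C) loss does: when the training samples are drawn from a Gaussian proposal rather than the prior, and the prior, proposal, and network posterior are all Gaussian, the correction can be applied analytically by combining the distributions in precision form. The sketch below is only an illustration of that correction; the function name `apt_gaussian_nll`, its signature, and the shapes are assumptions, not the paltas implementation in this PR.

```python
import tensorflow as tf

def apt_gaussian_nll(y_true, mu_post, prec_post,
                     mu_prior, prec_prior, mu_prop, prec_prop):
    """Negative log of the proposal posterior q~(y) ~ q(y) * p_prop(y) / p_prior(y).

    Shapes (illustrative): y_true, mu_post are (batch, n_params);
    prec_post is (batch, n_params, n_params); the prior/proposal terms are
    (n_params,) and (n_params, n_params).
    """
    # Combining Gaussians in precision form: add the proposal precision and
    # subtract the prior precision.
    prec_comb = prec_post + prec_prop - prec_prior
    # The combined mean solves prec_comb @ mu_comb = sum of (precision @ mean) terms.
    eta = (tf.linalg.matvec(prec_post, mu_post)
           + tf.linalg.matvec(prec_prop, mu_prop)
           - tf.linalg.matvec(prec_prior, mu_prior))
    mu_comb = tf.linalg.solve(prec_comb, eta[..., tf.newaxis])[..., 0]
    # Gaussian negative log likelihood written in terms of the precision matrix
    # (additive constants dropped).
    resid = y_true - mu_comb
    quad = tf.reduce_sum(resid * tf.linalg.matvec(prec_comb, resid), axis=-1)
    return 0.5 * quad - 0.5 * tf.linalg.logdet(prec_comb)
```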

smericks marked this pull request as draft on June 11, 2024.
smericks marked this pull request as ready for review on July 1, 2024.
@coveralls (Coverage Status)

coverage: 85.658% (-9.3%) from 94.973% when pulling 0967a26 on smericks:snpe into 5405f4a on swagnercarena:main.

Update the point source test to use a consistent ab_zeropoint, and ensure a bright PS to offset how bright the source at low redshift is.
- Consistent source/PS center_x, center_y
- Increase the magnitude of the PS
swagnercarena (Owner) left a comment:

These changes look excellent; I've just added a few minor comments (mostly about tests).

"""Helper function to convert mu, prec_matrix to normalized parameter
space

Args:
swagnercarena (Owner):

I'm not familiar with this string formatting. Does that shape information translate to the Read the Docs output?

Remove the indentation from Returns (it should have the same indentation as Args).
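As an illustration of the suggested layout (Returns flush with Args, shapes carried in the type descriptions), here is a hedged sketch of what such a helper could look like. The name `normalize_mu_prec` and the exact signature are hypothetical; only the mean/precision rescaling is the point.

```python
import numpy as np

def normalize_mu_prec(mu, prec_matrix, param_means, param_stds):
    """Convert mu, prec_matrix to normalized parameter space.

    Args:
        mu (np.array): Mean with shape (n_params,).
        prec_matrix (np.array): Precision matrix with shape
            (n_params, n_params).
        param_means (np.array): Normalization means, shape (n_params,).
        param_stds (np.array): Normalization standard deviations, shape
            (n_params,).

    Returns:
        (np.array, np.array): The normalized mean and precision matrix.
    """
    mu_norm = (mu - param_means) / param_stds
    # If y_norm = (y - mean) / std, covariances scale by 1/(std_i * std_j),
    # so the precision matrix scales by std_i * std_j.
    prec_norm = prec_matrix * np.outer(param_stds, param_stds)
    return mu_norm, prec_norm
```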

@@ -120,6 +120,37 @@ def unnormalize_outputs(input_norm_path,learning_params,mean,standard_dev=None,
if cov_mat is not None:
cov_mat[:,lpi,:] *= param_std
cov_mat[:,:,lpi] *= param_std

# TODO: write test after moving (make sure identity operation w/ unnormalized)
swagnercarena (Owner):

I think you've added a test for this now so please remove the TODO.
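For reference, a minimal version of the identity check the TODO describes could look like the sketch below. The direct rescaling formulas stand in for the actual normalize/unnormalize helpers, whose exact signatures are not assumed here, and the numeric values are made up.

```python
import numpy as np

def test_norm_unnorm_identity():
    # Round-trip check: normalizing and then unnormalizing a mean/covariance
    # pair should recover the original values.
    rng = np.random.default_rng(0)
    mean = rng.normal(size=3)
    cov = np.diag(rng.uniform(0.5, 2.0, size=3))
    param_means = np.array([0.8, 2.0, 0.0])
    param_stds = np.array([0.15, 0.1, 0.2])

    # Forward: map into normalized space.
    mean_norm = (mean - param_means) / param_stds
    cov_norm = cov / np.outer(param_stds, param_stds)
    # Backward: map back to physical space.
    mean_back = mean_norm * param_stds + param_means
    cov_back = cov_norm * np.outer(param_stds, param_stds)

    np.testing.assert_allclose(mean_back, mean)
    np.testing.assert_allclose(cov_back, cov)
```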

self.proposal_prec = tf.constant(proposal_prec,dtype=tf.float32)

@staticmethod
def log_gauss_full(y_true,y_pred,prec_mat):
swagnercarena (Owner):

Is this the same function as in the class you're inheriting from? If so, there's no need to redefine it. Or am I missing an important detail?

return tf.reduce_min(loss_stack,axis=-1)


class DiagonalCovarianceAPTLoss(DiagonalCovarianceLoss):
swagnercarena (Owner):

I think if you inherit from FullCovarianceAPTLoss you don't have to repeat the __init__.
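A self-contained sketch of that suggestion: the diagonal class inherits the constructor (and any shared static helpers) from the full-covariance class and overrides only the loss. The class bodies here are illustrative stand-ins, not the paltas implementations, and the APT proposal correction is omitted for brevity.

```python
import tensorflow as tf

class FullCovarianceAPTLoss:
    def __init__(self, mu_prior, prec_prior, mu_prop, prec_prop):
        # Stores the prior and proposal terms needed by both loss variants.
        self.mu_prior = tf.constant(mu_prior, dtype=tf.float32)
        self.prec_prior = tf.constant(prec_prior, dtype=tf.float32)
        self.mu_prop = tf.constant(mu_prop, dtype=tf.float32)
        self.prec_prop = tf.constant(prec_prop, dtype=tf.float32)

class DiagonalCovarianceAPTLoss(FullCovarianceAPTLoss):
    # No __init__ here: the parent constructor already stores everything the
    # diagonal variant needs, so only the loss computation differs.
    def loss(self, y_true, mu_pred, log_var_pred):
        # Diagonal-precision Gaussian NLL (additive constants dropped).
        prec_diag = tf.exp(-log_var_pred)
        resid = y_true - mu_pred
        return 0.5 * tf.reduce_sum(prec_diag * resid**2 + log_var_pred, axis=-1)
```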

norm_dict = Analysis.dataset_generation.normalize_outputs(metadata,
learning_params,input_norm_path)

# Actual mean/precision matrix output by a network for these params
swagnercarena (Owner):

The formatting of the spacing looks odd here


# create APT loss object
input_norm_path = 'test_data/apt_norms.csv'
#'/Users/smericks/Desktop/StrongLensing/STRIDES14results/sep7_narrow_lognorm/lr_1e-3/norms.csv'
swagnercarena (Owner):

Vestigial comment?

mu_prior, prec_prior, mu_prop,prec_prop,input_norm_path=input_norm_path)
# move to normalized space
# MU_PRIOR, PREC_PRIOR, MU_PROP, PREC_PROP ALREADY MODIFIED B/C PASS BY REFERENCE
# (TODO: change how this is handled to avoid further issues)
swagnercarena (Owner):

I think you fixed this
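Since the capitalized comment is about NumPy arrays being modified in place when passed by reference, the usual pattern (which the fix presumably uses) is either to have the conversion return fresh arrays or to take defensive copies in the test. A tiny sketch of the defensive-copy version, with made-up values:

```python
import numpy as np

# Hypothetical prior values for illustration only.
mu_prior = np.array([0.8, 2.0])
prec_prior = np.eye(2)

# Defensive copies: if the normalization helper modifies its arguments in
# place, the un-normalized originals are still available for later assertions.
mu_prior_unnorm = np.copy(mu_prior)
prec_prior_unnorm = np.copy(prec_prior)
```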

truth1 = tf.constant(truth1,dtype=tf.float32)
truth1_batched = tf.squeeze(tf.stack([truth1,truth1]))

# CONFIRMED: the problem is NOT the prefactor
swagnercarena (Owner):

Is this comment still relevant?

# Add point source and validate output
self.c.sample['point_source_parameters'] = {'x_point_source':0.001,
'y_point_source':0.001,'magnitude':22,'output_ab_zeropoint':25.95,
self.c.sample['point_source_parameters'] = {'x_point_source':0.0,
swagnercarena (Owner):

Why did you change this?

@@ -0,0 +1,9 @@
parameter,mean,std
main_deflector_parameters_theta_E,0.7992458031844867,0.15239538119453927
swagnercarena (Owner):

I'm guessing this is pulled from an actual norm file. If you manually change the values to be more exact (0.8, 0.15, etc.), it could be easier for testing.
