
Validate multivariate dispersions in Keplerian state space #339

Open
ChristopherRabotin opened this issue Jul 19, 2024 · 2 comments
@ChristopherRabotin
Member

High level description

Nyx uses the hyperdual representation of an orbit to compute the partial derivatives of the orbital elements with respect to the Cartesian state. At the moment, the disperse_keplerian and disperse_raan_only unit tests attempt to validate the multivariate dispersions. However, these tests fail the success criteria as coded: perhaps the criteria themselves are wrong?

I have high confidence that the computation of the partials of the orbital elements with respect to the Cartesian state is correct for several reasons:

  1. They are used to rotate the Kalman filter covariance from an inertial frame into the Keplerian state space, and those covariances match the Monte Carlo runs exactly in the 02_jwst test case, thereby validating that the state space transformation into the Keplerian space is correct;
  2. They are used in the hyperdual Newton-Raphson targeting, although that isn't as precise as the other approach (cf. the AAS 21-715 paper).

I'm also reasonably confident that the multivariate distribution is correctly implemented for these reasons:

  1. The code matches the numpy implementation, and I have manually confirmed that the results are comparable (albeit not identical) to what numpy returns for the same inputs to the SVD function;
  2. The MVN distribution works exactly as expected when dispersing a full Cartesian state or when dispersing over RMAG.

This leads me to believe that the issue may lie in the validity of the pseudo-inverse of the Jacobian, or in the test itself. The test counts how many samples fall outside the 3-sigma bounds and expects that count to be around 0.3% of the 1000 samples. This check fails.

Test plans

  • Find papers on how to correctly validate the implementation of an MVN in a different state space
@ChristopherRabotin
Member Author

Multivariate dispersions after a state space transformation form an N-dimensional ellipsoid. Therefore, the 3-sigma bounds test may not be entirely valid.

A better approach is to compute the covariance post-dispersion and check that it matches the input covariance on the elements that were dispersed.

use nalgebra::{Cholesky, DMatrix, DVector};
use rand::prelude::*;
use rand_distr::StandardNormal;

fn multivariate_normal(
    mean: &DVector<f64>,
    cov: &DMatrix<f64>,
    num_samples: usize,
) -> Vec<DVector<f64>> {
    // Step 1: Cholesky decomposition of the covariance matrix
    let chol = Cholesky::new(cov.clone()).expect("Covariance matrix is not positive definite");
    let l = chol.l();

    // Step 2: Generate samples from a standard normal distribution
    let mut rng = rand::thread_rng();
    let standard_normal_samples: Vec<DVector<f64>> = (0..num_samples)
        .map(|_| {
            DVector::from_iterator(mean.len(), (0..mean.len()).map(|_| rng.sample(StandardNormal)))
        })
        .collect();

    // Step 3: Transform the standard normal samples (borrow `l` so it can be
    // reused across iterations)
    let transformed_samples: Vec<DVector<f64>> = standard_normal_samples
        .iter()
        .map(|sample| &l * sample)
        .collect();

    // Step 4: Add the mean vector to the transformed samples
    transformed_samples.iter().map(|sample| sample + mean).collect()
}

fn calculate_sample_covariance(samples: &[DVector<f64>], mean: &DVector<f64>) -> DMatrix<f64> {
    let mut covariance = DMatrix::zeros(mean.len(), mean.len());

    for sample in samples {
        let diff = sample - mean;
        covariance += &diff * diff.transpose();
    }

    // Divide by N - 1 for the unbiased estimator, since `mean` is itself
    // estimated from the samples.
    covariance / (samples.len() as f64 - 1.0)
}

fn main() {
    // Example mean vector and covariance matrix
    let mean = DVector::from_vec(vec![1.0, 2.0, 3.0]);
    let cov = DMatrix::from_row_slice(3, 3, &[
        1.0, 0.5, 0.2,
        0.5, 1.0, 0.3,
        0.2, 0.3, 1.0,
    ]);
    let num_samples = 1000;

    // Generate samples
    let samples = multivariate_normal(&mean, &cov, num_samples);

    // Calculate the sample mean
    let sample_mean = samples
        .iter()
        .fold(DVector::zeros(mean.len()), |acc, sample| acc + sample)
        / num_samples as f64;
    println!("Sample mean:\n{}", sample_mean);

    // Calculate the sample covariance matrix
    let sample_cov = calculate_sample_covariance(&samples, &sample_mean);
    println!("Sample covariance:\n{}", sample_cov);

    // Compare the sample covariance matrix to the original covariance matrix
    println!("Original covariance matrix:\n{}", cov);

    let cov_diff = (&sample_cov - &cov).norm();
    println!("Difference between sample covariance and original covariance: {}", cov_diff);
}


@ChristopherRabotin
Member Author

This work should also make a simple but important change: if there is only one input variable and it is a settable parameter, then the draw should be univariate. This may lead to renaming the MultivariateNormal structure to RandomVariable, or something with a more representative name.
