
Longitudinal Cortical Thickness

ANTs longitudinal cortical thickness

Current approach: the script recommended by Nick. Example call:

module load ANTs
export ANTSPATH=/appl/ANTs-2.3.5/bin

ROOT=$PWD/testlong
export ROOT

# First argument is the subject ID; remaining arguments are the session T1w images.
if [ $# -gt 0 ]; then
    SUB=$1
    shift
    $ANTSPATH/antsLongitudinalCorticalThickness.sh \
        -d 3 \
        -o $ROOT/$SUB/T1_WB_long \
        -c 0 \
        -g 1 \
        -y 1 \
        -t /project/wolk/ADNI_longitudinal-Templates/Normal/T_template0_BrainCerebellum.nii.gz \
        -e /project/wolk/ADNI_longitudinal-Templates/Normal/T_template0.nii.gz \
        -m /project/wolk/ADNI_longitudinal-Templates/Normal/T_template0_BrainCerebellumProbabilityMask.nii.gz \
        -f /project/wolk/ADNI_longitudinal-Templates/Normal/T_template0_BrainCerebellumExtractionMask.nii.gz \
        -p /project/wolk/ADNI_longitudinal-Templates/Normal/Priors/priors%d.nii.gz \
        -r 0 -q 0 \
        "$@"
fi
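
Saved as, say, run_long.sh (name hypothetical), the script takes the subject ID followed by that subject's session T1w images:

bash run_long.sh sub-001 /path/to/sub-001_ses-01_T1w.nii.gz /path/to/sub-001_ses-02_T1w.nii.gz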

Options:

  • -g: denoise images. Generally considered a good idea.

  • -y: restrict the SST shape update. With -y 1, the rigid component of the average transform is excluded from the template shape update. This potentially biases towards the baseline orientation, because the first time point initializes the template, but greatly reduces template drift.

ANTsPyNet

The ANTsPyNet implementation is the longitudinal_cortical_thickness function (example call after the algorithm steps below).

Algorithm:

  1. Create SST. One can pass a pre-existing SST, or pass an initial template to which all sessions are normalized.

  2. Run deep_atropos on SST.

  3. Affine registration of session T1w to SST.

  4. Use posteriors from (2) as priors for segmentation of each session T1w.

  5. Compute thickness using DiReCT.
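
For reference, a minimal sketch of the call, assuming antspynet.longitudinal_cortical_thickness with the parameter names of recent ANTsPyNet releases (session file names are hypothetical; check the installed docstring):

import ants
import antspynet

# Hypothetical session file names; any ordered list of T1w images works.
t1s = [ants.image_read(f) for f in
       ["ses-01_T1w.nii.gz", "ses-02_T1w.nii.gz", "ses-03_T1w.nii.gz"]]

# Runs the steps above: SST construction (or reuse of a supplied template),
# deep_atropos on the SST, per-session affine alignment to the SST,
# prior-based segmentation of each session, and DiReCT thickness.
results = antspynet.longitudinal_cortical_thickness(
    t1s,
    initial_template="oasis",  # packaged starting template; a pre-built SST also works
    number_of_iterations=2,    # template-building iterations for the SST
    verbose=True)

# results holds per-session outputs (thickness, segmentation, posteriors);
# the exact return structure varies by version.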

Major differences between ANTs and ANTsPyNet pipelines

| Stage | ANTs | ANTsXNet |
| --- | --- | --- |
| SST construction transform | SyN (in ANTS) | Quick affine (or input own template) |
| Final transform of session to SST | SyN | Quick affine |
| Segmentation of SST | Atropos with priors from group template | Deep Atropos |
| Segmentation of session T1w | In native space (by default; optionally rigid to SST) | In SST space |
| Transform to group template | SyN | Not computed |

Challenges

Inconsistent imaging parameters between sessions

This includes changes in hardware (leading to quite different bias fields) as well as changes in spatial resolution, physical space origin, and FOV. The biggest problem is variable FOV, because it can cause SST construction to fail.

The workaround for this in ANTs is the -y 1 option, which excludes the rigid component from the SST shape update; however, this can introduce a subtle bias because the first time point is used to initialize the template.

Alternative solutions:

  • Initialize the SST to the group template, with or without antsAI

  • Pre-process sessions to resolve differences in FOV, origin, etc., then use -y 1

Conclusion: We agree that mixing protocols during longitudinal processing is probably a bad idea overall. Still, we should pre-process the data so that the templates are as consistent as possible while remaining unbiased (fix origins, normalize intensity).
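
A minimal sketch of such pre-processing with ANTsPy, assuming hypothetical file names; it fixes the physical-space origin and normalizes intensity, but does not address FOV differences:

import ants

baseline = ants.image_read("sub-01_ses-01_T1w.nii.gz")
followup = ants.image_read("sub-01_ses-02_T1w.nii.gz")

# Give both sessions the same physical-space origin so the SST
# initialization starts from a consistent head position (only sensible
# when the sessions have similar FOV and head placement).
followup.set_origin(baseline.origin)

# Rescale intensities to [0, 1] so no session dominates the SST
# averaging because of scanner scaling differences.
baseline = ants.iMath(baseline, "Normalize")
followup = ants.iMath(followup, "Normalize")

ants.image_write(followup, "sub-01_ses-02_T1w_preproc.nii.gz")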

Getting representative priors

Priors either come from a group template or via prior "cooking". The latter approach uses JLF to segment the SST; the smoothed labels then become the priors.

The output is sensitive to the priors. Template priors are faster but can be less representative.

JLF priors are very slow: even with fast (greedy) registration, the JLF itself takes a few hours. It also needs a representative atlas set with good priors for all tissue classes. Mindboggle labels are not suitable for this purpose; a hard-coded workaround is embedded in the ANTs script to deal with this.

Alternatives:

  • Segment the SST with a low prior weight and use the resulting posteriors (downweighting the influence of the atlas priors on the final result). The minimum useful prior weight is about 0.2; below that, the necessary spatial constraints break down. The ANTs six-class segmentation relies on spatial information because the classes are not well separated by appearance alone, so some prior weight is needed.

  • Use deep_atropos, SynthSeg, or similar to get priors directly on the SST.

Conclusion: We will use deep_atropos on the SST. Undecided: should we then refine the SST segmentation with classical Atropos after deep_atropos?
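
A minimal sketch of this step with ANTsPyNet, assuming an already-built SST image and the deep_atropos return structure of recent releases (a dictionary with 'segmentation_image' and 'probability_images'); the Atropos refinement at the end is the undecided part:

import ants
import antspynet

sst = ants.image_read("sst.nii.gz")  # hypothetical single-subject template

# Deep Atropos expects a whole-head T1w; it does its own preprocessing.
seg = antspynet.deep_atropos(sst, do_preprocessing=True, verbose=True)

# Classes 1-6: CSF, GM, WM, deep GM, brain stem, cerebellum (0 is background).
# Smooth the posteriors to use as session-level segmentation priors.
priors = [ants.smooth_image(p, 1.0) for p in seg["probability_images"][1:]]

# Optional (undecided): refine with classical Atropos on the SST,
# using the deep_atropos posteriors as spatial priors.
mask = ants.threshold_image(seg["segmentation_image"], 1, 6)
refined = ants.atropos(a=sst, x=mask, i=priors,
                       m="[0.1,1x1x1]", c="[5,0]", priorweight=0.25)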

Brain extraction

The built-in antsBrainExtraction.sh takes under an hour and generally works well, but it is outperformed by the deep learning methods in ANTsXNet, SynthStrip, and HD-BET.

Need to adapt pipeline to allow brain mask input (already done for cross-sectional pipeline).

Note that Deep Atropos requires the skull on, so if we import brain masks we still need the original T1w for that step. Alternatively, we could run deep_atropos on each session T1w beforehand and average the resulting posteriors afterwards.

Conclusion: We will use the HD-BET masks from cross-sectional processing; the union of these masks in SST space will be the longitudinal brain mask, applied to all sessions.
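
A minimal sketch of forming that union with ANTsPy, assuming the per-session HD-BET masks and session-to-SST transforms already exist (file names hypothetical):

import ants

sst = ants.image_read("sst.nii.gz")
sessions = [("ses-01_mask.nii.gz", "ses-01_to_sst_0GenericAffine.mat"),
            ("ses-02_mask.nii.gz", "ses-02_to_sst_0GenericAffine.mat")]

# Resample each session's HD-BET mask into SST space, then take the voxelwise union.
union = sst * 0
for mask_file, xfm in sessions:
    mask = ants.image_read(mask_file)
    mask_sst = ants.apply_transforms(fixed=sst, moving=mask,
                                     transformlist=[xfm],
                                     interpolator="nearestNeighbor")
    union = union + mask_sst
union = ants.threshold_image(union, 1, len(sessions))  # any session's mask counts

ants.image_write(union, "sst_longitudinal_brain_mask.nii.gz")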

Other pre-processing

AC-PC alignment

Pro: Makes visualization easier; possibly makes registration to the group template more robust.

Con: Extra interpolation of the data, and the possibility that the alignment fails.

Methods: BRAINS (Johnson), or affine registration to MNI followed by extraction of the rigid component.

Can apply after processing for QC purposes. Save the transform so that, longitudinally, we can align the SST to AC-PC without extra interpolation.
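
A minimal sketch of the affine-to-MNI approach with ANTsPy and NumPy, extracting the rigid component by polar decomposition (the MNI image shipped with ANTsPy and the file names are assumptions):

import ants
import numpy as np

t1 = ants.image_read("sst.nii.gz")
mni = ants.image_read(ants.get_ants_data("mni"))

# Affine registration to MNI.
reg = ants.registration(fixed=mni, moving=t1, type_of_transform="Affine")
aff = ants.read_transform(reg["fwdtransforms"][0])

# AffineTransform parameters: 3x3 matrix (row-major), then translation.
params = np.asarray(aff.parameters)
A, t = params[:9].reshape(3, 3), params[9:]

# Polar decomposition A = R S; keep the orthogonal (rigid) factor R.
U, _, Vt = np.linalg.svd(A)
R = U @ Vt
if np.linalg.det(R) < 0:  # guard against reflections
    U[:, -1] *= -1
    R = U @ Vt

rigid = ants.create_ants_transform(transform_type="AffineTransform",
                                   dimension=3, matrix=R, offset=t)
rigid.set_fixed_parameters(aff.fixed_parameters)  # keep the same rotation center
ants.write_transform(rigid, "sst_to_mni_rigid.mat")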

Longitudinal Atropos parameters

Open question: prior weight for the per-session (longitudinal) Atropos, 0.5 or 0.25?

Longitudinal SST transform

Open question: SyN or rigid-only for the session-to-SST transform? Preference: SyN.

Interpolation options

Should we use a better interpolator, or Gaussian interpolation for the priors as in ANTs? Many tools use BSpline or WindowedSinc.