
Roadmap to DBM analysis (mice)


Introduction

If you are running your DBM with in-house tools following our documentation, there are many helpful wikis to guide you through the process, but it can be confusing to know where they are and which ones are useful. To that end, the goals of this wiki are 1) to provide a high-level overview of the pipeline, 2) to point you towards all of the useful wikis, and 3) to fill in any gaps. Please bear in mind that this is a living document: if something is confusing, or if there is an error, edit it! Also check that you are using the most up-to-date versions of everything.

High-level overview of DBM pipeline

Quality Control

Convert to niftis

After you have acquired your raw data, the first step is to convert the Bruker data to niftis. To do so, follow the wiki here.

Manual QC

Once all of your data are converted to niftis, you can QC them manually. Typically for mouse data, you should either do this process 2-3 times yourself or have multiple raters QC the images to make sure your assessments are accurate. You can be guided by this wiki, but QC is a skill that needs to be developed. You may want to include borderline images in the preprocessing step in case preprocessing improves any of the issues you noticed.

Preprocessing

The preprocessing depends somewhat on your study. A guide to the standard preprocessing can be found here. If, however, you are working with embryos, use this preprocessing guide. If you are working with neonates, information can be found in the subsection here.

After preprocessing, be sure to look at your outputs to make sure nothing looks weird. For example, sometimes an image ends up partially outside the FOV.

Running DBM

In "recent" years (2022-2023) we moved away from the Pydpiper tools published in our older papers to Gabe's newer registration tools. As a result, the thorough documentation found here is marked "old". While some aspects are no longer applicable, other sections remain valuable, such as the "Looking at Results" section, which I will return to later. More applicable is the GitHub repository here; however, if this is your first time running DBM, it may be confusing to figure out what you need. Again, if you are using neonates, be sure to check out this wiki.

In general, you will likely run the DBM on Niagara. If this is your first time using Niagara, follow these instructions.

First, after logging into Niagara, note the hostname of the machine you are on; you will need it to get back to the running pipeline if you lose your connection.

# Example
> hostname
nia-login06.scinet.local

You will need to run your pipeline in a detachable terminal multiplexer, either screen or tmux.

Run your multiplexer:

screen # or tmux!
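
If you lose your connection, ssh back into Niagara, hop to the login node you noted earlier, and reattach to your session. A minimal sketch (the node name is just the example from above):

ssh nia-login06 # from whichever login node you land on, reach the one running your multiplexer
screen -r       # reattach to screen, or...
tmux attach     # ...reattach to tmux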

Load appropriate modules (or add them to a run script, even better!):

module load cobralab/2019b

Get a copy of the optimized_antsMultivariateTemplateConstruction GitHub repository:

git clone https://github.com/CoBrALab/optimized_antsMultivariateTemplateConstruction.git

Now you need to copy several things to Niagara to run the DBM. You will need all of your preprocessed images in nifti form. Let's say you are working in a directory on Niagara called DBM_test. I like to copy all of my inputs to a folder in that directory, for example:

rsync -avz *lsq6.nii.gz NIAGARAUSERNAME@niagara.computecanada.ca:/home/m/mchakrav/USER/scratch/DBM_test/inputs/

Then you need to create a file listing the paths to those files (relative paths are fine). To generate it, you can run the following in the Niagara terminal:

ls inputs/*.nii.gz >> input_files.txt

Importantly, if you have longitudinal data, the format of this file must be one row per subject with one column per timepoint. So, if you have 40 subjects and three timepoints, you will have 40 rows with three columns.
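
For example, a longitudinal input file for two subjects with three timepoints each might look like this (the filenames are hypothetical, and the two-level scripts expect the columns to be comma-separated):

inputs/sub01_tp1_lsq6.nii.gz,inputs/sub01_tp2_lsq6.nii.gz,inputs/sub01_tp3_lsq6.nii.gz
inputs/sub02_tp1_lsq6.nii.gz,inputs/sub02_tp2_lsq6.nii.gz,inputs/sub02_tp3_lsq6.nii.gz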

So now in the DBM_test directory you have input_files.txt, an inputs/ folder containing your niftis, and the optimized_antsMultivariateTemplateConstruction folder you cloned from GitHub. This folder contains the scripts you will need to run. If you are running a one-level (cross-sectional) analysis, the minimal run command for the modelbuild is:

optimized_antsMultivariateTemplateConstruction/modelbuild.sh input_files.txt

Or, if you are running a two-level (longitudinal) analysis:

optimized_antsMultivariateTemplateConstruction/twolevel_modelbuild.sh input_files.txt

You may want to select different options, such as including a starting target or increasing the time allotted to various stages, but these are decisions you will make along the way, and they are not generic enough to be covered here. Regardless, it's a good idea to save these commands in a .sh file such as:

module load cobralab/2019b
optimized_antsMultivariateTemplateConstruction/modelbuild.sh input_files.txt

so that you can re-run it later and keep a record of what your commands were.

Once the modelbuild has successfully run, you can run the DBM. Don't move anything, because the DBM script will look for the modelbuild outputs under their expected naming conventions. For a one-level, simply run:

optimized_antsMultivariateTemplateConstruction/dbm.sh input_files.txt

Or for a two-level:

optimized_antsMultivariateTemplateConstruction/twolevel_dbm.sh input_files.txt

QC'ing Outputs

Once your DBM is done running, it is a good idea to QC the outputs. The first thing to look at is the average that was built. It is found here: output/nlin/average/template_sharpen_shapeupdate.nii.gz.

Visualize it with Display to make sure there is nothing weird, such as the ghost of a misregistered scan. It can be helpful to change the view to spectral (D - S) and then drop the sliding bars on the side to the extremes, which will reveal any hidden misregistered scans.

Preparing for analyses in R

Once you are satisfied that your run is high quality, you can download the outputs you want for your analyses. It's also a good idea to save your run file (run_modelbuild.sh or run_dbm.sh); this gives you good documentation and makes it easy to update the command if you add other options later.

In addition to the average that you saved (template_sharpen_shapeupdate.nii.gz), the key things you will need for your analyses are the Jacobians, found at output/dbm/jacobian/full/smooth/*.nii.gz and output/dbm/jacobian/relative/smooth/*.nii.gz. "Full" are the absolute Jacobians, which include the affine transformations; "relative" are the relative Jacobians, which include only the nonlinear transformations. Which ones you analyze is likely determined by your research question, but you can save both to be safe.
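
For example, to pull the average and the smoothed relative Jacobians back to your local machine, you can mirror the upload command from earlier (the username and paths are the same placeholders as above):

rsync -avz NIAGARAUSERNAME@niagara.computecanada.ca:/home/m/mchakrav/USER/scratch/DBM_test/output/nlin/average/template_sharpen_shapeupdate.nii.gz .
rsync -avz NIAGARAUSERNAME@niagara.computecanada.ca:/home/m/mchakrav/USER/scratch/DBM_test/output/dbm/jacobian/relative/smooth/ jacobians/relative/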

To analyze the data voxelwise in R, you will need 1) your average, 2) a mask of the brain, and 3) the Jacobians, all in minc format.

Transform both your average and Jacobians to mincs with nii2mnc. To make a mask, follow this wiki.
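
A minimal conversion sketch, assuming the downloaded layout above (nii2mnc may not read gzipped niftis directly, so decompress first):

# convert the average
gunzip -k template_sharpen_shapeupdate.nii.gz
nii2mnc template_sharpen_shapeupdate.nii template_sharpen_shapeupdate.mnc

# convert each smoothed Jacobian
for f in jacobians/relative/*.nii.gz; do
  gunzip -k "$f"
  nii2mnc "${f%.gz}" "${f%.nii.gz}.mnc"
done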

The "Looking at Results" section of this wiki is still useful for setting up your R script to run an LM/LMER.
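
As a rough sketch of what the voxelwise model looks like with RMINC (the data frame gf and its columns file and treatment are hypothetical and must match your own spreadsheet):

library(RMINC)
# gf: one row per scan; gf$file points to the MINC Jacobians
vs <- mincLm(file ~ treatment, data = gf, mask = "mask.mnc")
# correct for multiple comparisons across voxels
qvals <- mincFDR(vs, mask = "mask.mnc")
qvals # prints t-statistic thresholds at various FDR levels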

Saving heatmap outputs of mincLMs/mincLMERs

To visualize the heatmaps of results automatically, follow this wiki.
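
For orientation, one common approach uses RMINC's slice-series plotting; a sketch, assuming the vs object from the model sketch above and the converted average (the thresholds and column name are placeholders to adapt):

library(RMINC)
anatVol <- mincGetVolume("template_sharpen_shapeupdate.mnc")
mincPlotSliceSeries(mincArray(anatVol),
                    mincArray(vs, "tvalue-treatment"),
                    anatLow = 10, anatHigh = 200, # display range for the anatomy
                    low = 2, high = 6,            # t-statistic display range
                    symmetric = TRUE,             # show positive and negative t
                    legend = "t-statistic")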

Identifying and plotting peak-voxels

Next, you will want to identify peak-voxels and plot them in either a boxplot or scatter-plot with lines, depending on whether your data are cross-sectional or longitudinal. This is especially helpful for understanding the direction of interaction terms or time terms (is volume decreasing, or is it just reduced in growth rate compared to a control group?)

You can identify peak voxels in one of two ways. The first is to open the stats files on the average, as described in detail here. You can hover over a specific voxel with your cursor and see its coordinates in the bottom left of the Display window. The world coordinates you will need are Xw, Yw, and Zw. Then you would use an atlas to identify where the voxel is located. This is a good starting place, but the second option is more reproducible.

The second is to use a label-file appropriate to your age group to automatically identify and label peak voxels, described here.
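
If you go the automated route, RMINC's mincFindPeaks is one way to get a table of peaks to feed into labeling; a sketch, again assuming the vs object from above (the threshold and column name are placeholders):

library(RMINC)
# local extrema in the t-statistic map, at least 1 mm apart
peaks <- mincFindPeaks(vs, column = "tvalue-treatment",
                       direction = "both", minDistance = 1, threshold = 4)
peaks # data frame of peak world coordinates and values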

Once you have identified your peak voxels, you want to plot them properly, following the recommendations here.

To properly plot the voxel, you will have to rerun an LMER at precisely that voxel, identified by its world coordinates, so that you can extract the parameters of the model. For example, I used the following function, which takes world coordinates and a region name and returns the plot:

# Needs: RMINC (mincGetWorldVoxel), lme4 (lmer), effects (Effect), and ggplot2
library(RMINC)
library(lme4)
library(effects)
library(ggplot2)

plot_peak_voxel <- function(Xw, Yw, Zw, region){
  # Extract the Jacobian value at these world coordinates for every scan
  data$voxel <- mincGetWorldVoxel(filenames = data$file, v1 = Xw, v2 = Yw, v3 = Zw)
  # Refit the model at this single voxel to extract its parameters
  model_voxel <- lmer(voxel ~ treatment * poly(I(age - 24), 2) * sex + (1|ID) + (1|mom_id), data = data)
  # Model predictions with confidence bands across the age range
  fitdata_voxel <- as.data.frame(Effect(c("treatment", "age", "sex"), model_voxel, xlevels = list(age = seq(20, 90, 5))))
  p <- ggplot(data = data, aes(y = voxel, x = age, color = treatment, group = treatment)) +
    geom_point() +
    geom_line(data = fitdata_voxel, aes(y = fit), size = 2) +
    geom_ribbon(data = fitdata_voxel, aes(y = fit, ymin = lower, ymax = upper), alpha = 0.2) +
    xlab("Age") +
    ylab(bquote('Absolute Jacobians')) +
    theme_classic() +
    labs(fill = "Treatment", group = "Treatment", color = "Treatment", title = paste0("Peak Voxel in ", region)) +
    scale_color_manual('Treatment', values = c('SAL' = '#420085', 'THC' = '#018700')) +
    theme(plot.title = element_text(hjust = 0.5, size = 16, face = "bold")) +
    theme(axis.text = element_text(size = 18), axis.title = element_text(size = 20, face = "bold"),
          legend.text = element_text(size = 16), legend.title = element_text(size = 18)) +
    scale_x_continuous(breaks = c(20, 40, 60, 90)) +
    facet_wrap(~sex)
  return(p) # return the plot so it can be printed or saved by the caller
}

In this case, data is the data frame and data$file contains the paths to the Jacobian files. Having the code in a function means you can call it later in a loop if you used an automated approach.
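
For example, with hypothetical world coordinates for a peak:

p <- plot_peak_voxel(0.5, -1.8, 2.3, "Hippocampus")
print(p)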

Remember that at this point the result of that model does not matter; you have already identified the peak voxels.

Preparing figures for posters/papers

To be continued...