
Running RABIES on Niagara


First, log in to Compute Canada (Niagara) and build the Singularity image:

singularity build rabies.sif docker://gabdesgreg/rabies:tag

The "tag" should be one of the available versions on docker hub https://hub.docker.com/r/gabdesgreg/rabies/tags . Then you can run the singularity container to execute RABIES by calling the newly created singularity image rabies.sif. Refer to RABIES documentation for further details on containers: https://github.com/CoBrALab/RABIES

In its current form (November 2020) RABIES comprises three separate steps: preprocessing, confound regression, and analysis, which must be run in sequence.

Before you begin preprocessing, it is essential that your data are formatted according to the Brain Imaging Data Structure (BIDS), as described in the RABIES github and at https://bids.neuroimaging.io/. You must follow the same syntactic rules for both folder hierarchy and file naming as described in these resources. More specifically, there must first be a folder for each subject named 'sub-{subject_name}' (where {} is replaced by the subject name); within each subject folder, there must be a session folder ('ses-{session_name}') if relevant, then folders 'anat/' and 'func/' for the anatomical and functional images; and finally, the corresponding anatomical and functional files must have the following syntax: 'sub-{subject_name}_ses-{session_name}_run-{run_number}_{file_type}.nii.gz'. The file type is 'bold' or 'cbv' for the functional image, and 'T1w' or 'T2w' for the anatomical image. The 'run-' specification is optional, and only relevant for the functional images if multiple runs were acquired. For example, a structural scan initially named mch_mia_111_21_20190916_112219_327681_mri_1.nii was changed to sub-11121/ses-1/anat/sub-11121_ses-1_T1w.nii, and the corresponding functional image was sub-11121/ses-1/func/sub-11121_ses-1_bold.nii. Note that you must not include '_' in the subject or session names themselves, as this will create errors in the parsing of the file information. Investing the time to rename and restructure your scans up front will save you work and effort downstream.
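A minimal sketch of the renaming for the example above, assuming the raw scans sit in your current directory and the BIDS input folder is called nii_inputs/ (the raw functional file name is hypothetical):

# Create the BIDS folder hierarchy for one subject/session
mkdir -p nii_inputs/sub-11121/ses-1/anat nii_inputs/sub-11121/ses-1/func
# Rename the raw anatomical scan to its BIDS-compliant name
mv mch_mia_111_21_20190916_112219_327681_mri_1.nii nii_inputs/sub-11121/ses-1/anat/sub-11121_ses-1_T1w.nii
# Same for the functional scan (raw_functional_scan.nii is a placeholder name)
mv raw_functional_scan.nii nii_inputs/sub-11121/ses-1/func/sub-11121_ses-1_bold.nii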

Preprocessing
The first step is to create a script which will execute RABIES as a job on the Niagara compute cluster. You can choose either to provide raw anatomical and functional scans, or, if you have lsq6 outputs from a two-level model build, to provide those and disable the anatomical preprocessing steps, as is done in the example script below. A thorough description of the other options and flags can be found on the RABIES github page.

Use the following syntax to write a script (e.g. rabies_singularity_call.sh):

#!/bin/bash
#SBATCH --time=24:00:00
#SBATCH --nodes=1
#SBATCH --account=rrg-mchakrav-ab
module load singularity
singularity run -B /home/m/mchakrav/lanicupo/scratch/20201023_RABIES/nii_inputs:/nii_inputs:ro \
-B /home/m/mchakrav/lanicupo/scratch/20201023_RABIES/rabies_out:/rabies_out \
/home/m/mchakrav/lanicupo/scratch/20201023_RABIES/rabies.sif -p MultiProc preprocess /nii_inputs /rabies_out \
--disable_anat_preproc \
--anatomical_resampling 0.15x0.15x0.15 \
--autoreg

The "-p MultiProc" will activate the option to run the pipeline using the multiple cores available on niagara clusters (80 cores per node, so up to 80 parallel processes). The option #SBATCH --nodes=1 will specify the number of nodes to request. You can display the --help message by simply typing "singularity run /path_to_singularity_image/rabies-0.2.0.sif --help", to prepare your RABIES call.
Note the syntax involving -B and the paths. In a sense, we give those paths certain nicknames, so in this script when we refer to /rabies_out it is directed to the path /home/m/mchakrav/lanicupo/scratch/20201023_RABIES/rabies_out. Before running the script, create a file rabies_out to deposit your outputs.

I highly recommend that you first execute the newly created .sh script by calling "bash rabies_singularity_call.sh" to make sure there are no errors in the syntax of the command. If it starts to run, or if you get an error about insufficient space/memory, your script is correct; there just isn't enough space on the Niagara login node, so it's time to submit the job to the cluster. To send the job request to the cluster, run "sbatch rabies_singularity_call.sh". You can then visualize your job list with "squeue -u your_account_name".
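Put together, the submission steps look something like this (the path and account name are just the examples used above; substitute your own):

# Create the output folder that /rabies_out is bound to
mkdir -p /home/m/mchakrav/lanicupo/scratch/20201023_RABIES/rabies_out
# Quick syntax check on the login node (interrupt it once it clearly starts running)
bash rabies_singularity_call.sh
# Submit the job to the cluster and monitor it
sbatch rabies_singularity_call.sh
squeue -u your_account_name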

When it has finished running, first make sure that no errors appear in the log file found in rabies_out/rabies_preprocess.log (scroll through it; if there is no error message at the end, the pipeline ran to completion). Then, be sure to QC your outputs following the github recommendations (functional QC guide to come).
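One quick way to scan the log for problems (a sketch; adapt the path to wherever your rabies_out folder actually lives):

# Show the end of the log, then search the whole file for error messages
tail -n 20 rabies_out/rabies_preprocess.log
grep -i "error" rabies_out/rabies_preprocess.log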

Confound Regression

This step takes the outputs from preprocessing as inputs. Set up a script as follows (rabies_conreg_call.sh):

#!/bin/bash
#SBATCH --time=24:00:00
#SBATCH --nodes=1
#SBATCH --account=rrg-mchakrav-ab
module load singularity
singularity run -B /home/m/mchakrav/lanicupo/scratch/20201023_RABIES/rabies_out:/rabies_out \
-B /home/m/mchakrav/lanicupo/scratch/20201023_RABIES/nii_inputs:/nii_inputs -B /home/m/mchakrav/lanicupo/scratch/20201023_RABIES/conreg_out:/conreg_out \
/home/m/mchakrav/lanicupo/scratch/20201023_RABIES/rabies.sif -p MultiProc confound_regression /rabies_out /conreg_out \
--apply_scrubbing \
--commonspace_bold

Here we give nicknames to rabies_out (which is the input here), nii_inputs (from before), and conreg_out (our new output folder). As before, create conreg_out before running the script. The rabies_out folder can also be specified instead of conreg_out to add the confound regression outputs to the previous folder. This step takes significantly less time, so you can allocate less than 24:00:00 as needed!
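The submission pattern is the same as for preprocessing (again, the path is just the example used above):

# Create the confound regression output folder, then submit the job
mkdir -p /home/m/mchakrav/lanicupo/scratch/20201023_RABIES/conreg_out
sbatch rabies_conreg_call.sh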

Analysis

The final step in RABIES takes the output from confound regression as input and performs the analytical steps you specify.

Set up your script as follows (rabies_analysis_call.sh):

#!/bin/bash
#SBATCH --time=24:00:00
#SBATCH --nodes=1
#SBATCH --account=rrg-mchakrav-ab
module load singularity
singularity run -B /home/m/mchakrav/lanicupo/scratch/20201023_RABIES/conreg_out:/conreg_out \
-B /home/m/mchakrav/lanicupo/scratch/20201023_RABIES/rabies_out:/rabies_out -B /home/m/mchakrav/lanicupo/scratch/20201023_RABIES/nii_inputs:/nii_inputs -B /home/m/mchakrav/lanicupo/scratch/20201023_RABIES/analysis_out:/analysis_out \
/home/m/mchakrav/lanicupo/scratch/20201023_RABIES/rabies.sif -p MultiProc analysis /conreg_out  /analysis_out \
--group_ICA

As before, make sure you create analysis_out before running the script. Again, this step is shorter, so it probably won't require a 24-hour allocation.
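And the same submission pattern once more (example path as above):

# Create the analysis output folder, then submit the job
mkdir -p /home/m/mchakrav/lanicupo/scratch/20201023_RABIES/analysis_out
sbatch rabies_analysis_call.sh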
