
"Nick" parcellation for mouse brains #133

Open
araikes opened this issue Sep 4, 2024 · 25 comments
@araikes

araikes commented Sep 4, 2024

Hi,

First, thank you for the recent work on mouse brains. I'm genuinely excited about it and the potential to accelerate some of our work. I have one quick question (as of right now, anyway):

In the "mouse_brain_parcellation" function, ROIs are defined as follows:

 which_parcellation : string
        Brain parcellation type:
            * "nick" - t2w with labels:
                - 1: cerebral cortex
                - 2: cerebral nuclei
                - 3: brain stem
                - 4: cerebellum
                - 5: main olfactory bulb
                - 6: hippocampal formation
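
For reference, a minimal sketch of how I'm invoking this; the return structure assumed below (a dict with a "parcellation_image" entry) is my reading of the docstring, so verify against the installed version:

import ants
import antspynet

# Hypothetical input file name.
t2 = ants.image_read("mouse_t2w.nii.gz")

# Run the "nick" parcellation (labels 1-6 as listed above).
parc = antspynet.mouse_brain_parcellation(t2,
    which_parcellation="nick", verbose=True)

# Assumed return structure: a dict holding the label image.
ants.image_write(parc["parcellation_image"], "nick_parcellation.nii.gz")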

I downloaded the 50um DevCCF P56 brain and "Nick" parcellation. Looks like the hippocampus and olfactory bulb labels are swapped (hippocampus has label 5):
[screenshot]

However, when I run the brain parcellation on my own data, it appears (aside from some pretty obvious segmentation errors) that the labels reflect the script's parcellation scheme:
[screenshot]

Is there any issue with the labels in the original parcellation image being potentially mismatched?

@ntustison
Member

Thanks. Yes, labels 5 and 6 are swapped between the description and the mask but that doesn't matter for the results. Everything that is visible to the user at the interface level (i.e., description and results) should be correct.

@ntustison
Member

Actually, let me double check that on my other machine. The labels don't necessarily have to line up but I want to make sure that's actually what I did for training.

@ntustison
Member

Okay, so thanks for pointing this out. I had actually changed it during training and hadn't updated the mask on figshare. That should all be fixed now. Make sure you delete the cached mask in ~/.keras/ANTsXNet/ before trying it again.
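
If it helps, a small sketch for clearing the cached files programmatically; the "*mouse*" filename pattern is an assumption, so deleting the whole directory (at the cost of re-downloading everything) also works:

import glob
import os

# Remove cached ANTsXNet mouse files so the updated versions are re-downloaded.
cache_dir = os.path.expanduser("~/.keras/ANTsXNet")
for path in glob.glob(os.path.join(cache_dir, "*mouse*")):
    os.remove(path)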

@araikes
Author

araikes commented Sep 4, 2024

Will do.

So with that in mind, is the cortical thickness pipeline supposed to be combining the cortex with the hippocampus or with the olfactory bulb?

@ntustison
Member

I'll have to verify to make sure but that'll probably need to change as well.

But, setting aside this immediate labeling problem to be fixed, the specific choices I made as to which regions should be used to calculate cortical thickness were somewhat arbitrary, so I concede that a better selection might exist. I honestly don't know. But that's why I think what we proposed in the paper is so nice: one can easily tailor the specific parcellation toward a potentially better choice.

@ntustison
Member

No, sorry, I take that back. The final label results should be correct, just not the input mask.

You should be good to go once you update the repo and clear the cache. Let me know if that's not the case.

@araikes
Author

araikes commented Sep 5, 2024

No complaints about the ROI choices. I was just curious whether the cortical thickness ROIs were intended to follow the label order in the original mask or the updated one.

With respect to the currently available tools, most of my work deals with ex-vivo mouse brains. I've tried using the tutorial snippet to run the brain masking on my data and it's good but not great (it catches some of the mouse face and misses the cerebellum). Is it just a matter of training my own masking network?
[screenshot]
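
For context, the tutorial snippet I'm using looks essentially like this (the "modality" keyword is my reading of the docs, so treat it as an assumption):

import ants
import antspynet

# Hypothetical ex-vivo input file.
t2 = ants.image_read("exvivo_t2w.nii.gz")

# Brain extraction returns a probability image; threshold it to get a mask.
prob_mask = antspynet.mouse_brain_extraction(t2, modality="t2", verbose=True)
mask = ants.threshold_image(prob_mask, 0.5, 1.0, 1, 0)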

@ntustison
Member

Yeah, if you take a look at the preprint describing this work, I used two templates that I built from publicly available data sets (that I just happened to find online) with aggressive data augmentation. And although I've done something similar in the human brain, the problem in mice is that image quality generally seems much more varied.

One possible solution: you could make a template from your data and provide a mask (perhaps using the ANTsXNet tools to get an initial mask), and I'd be more than happy to retrain the network with this additional data point. You might've noticed something related in the parcellation code. I had a couple of people reach out and ask about training their own parcellation scheme; I walked them through the training process using code that I've posted elsewhere, and we decided to post it as another option.

@araikes
Author

araikes commented Sep 5, 2024

Yeah, I saw that other post yesterday, which got me thinking about (re-)training, but I wasn't certain whether the existing network could be augmented with an ex-vivo dataset or if it would need its own separate network. I definitely have a template + mask (actually, I have a couple, since we do study-specific templates) and I'd be interested in whether it improves the masking. I can either send it to you or, if you can walk me through the retraining for the mask, I can try it as well.

Of note, if I pass an existing mask to either the parcellation function or the CT function, it seems to work really well:
[screenshot]
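
Roughly what I'm doing (the "mask" keyword and the cortical thickness function name follow my reading of the docstrings, so treat them as assumptions):

import ants
import antspynet

t2 = ants.image_read("exvivo_t2w.nii.gz")      # hypothetical input
mask = ants.image_read("exvivo_mask.nii.gz")   # precomputed brain mask

# Supplying a precomputed mask bypasses the built-in brain extraction.
parc = antspynet.mouse_brain_parcellation(t2, mask=mask,
    which_parcellation="nick", verbose=True)
ct = antspynet.mouse_cortical_thickness(t2, mask=mask, verbose=True)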

@ntustison
Member

Yeah, that makes sense---brain extraction is probably the most crucial step to ensure quality downstream processing.

For the templates---can you just post a couple screenshots so I can see if it would be feasible to incorporate them into refining the existing network?

@araikes
Author

araikes commented Sep 5, 2024

Template: [two screenshots]

Mask overlay: [screenshot]

GIF of whole brain: [animation: template_sharpen_shapeupdate (1)]

@ntustison
Member

Nice. Thanks.

So if you wanted to simply use brain extraction through the ANTsXNet tools, we could include that template in refining the current network. It's definitely higher quality than the CAMRI template I used and would complement the second template, "bsplineT2MouseTemplate.nii.gz", which you have probably already downloaded to ~/.keras/ANTsXNet/ by using the mouse segmentation tools. I'm currently occupying my two local GPUs with training, but one should open up tonight; I could start this then, and I'm pretty sure it would be done by early next week.

But if you wanted to do the training yourself, you could do that as well using these data and the scripts I linked to previously.

@araikes
Author

araikes commented Sep 5, 2024

Ok. Let me see if I can do it and if I run into problems, I'll let you know.

Due to how things are set up on my HPC (CUDA 12.4), I'm running ANTsPyNet in a Singularity container built on TensorFlow 2.17. Aside from reading in my data and adding it to the list of templates and masks, are there any specific edits I need to make to either batch_generator.py or train_model.py from their current state on the repo (e.g., more batches, loading the existing weights, etc.), or should I be able to run them as is?

@ntustison
Member

In general, I hesitate to provide much support at all for training because it's so system-dependent. In addition, although I publicly share all my scripts, network training is currently outside the scope of ANTsXNet.

@araikes
Author

araikes commented Sep 5, 2024

That's fair. I just wasn't sure if there were specific things that needed to be edited. For instance, you referenced doing "aggressive data augmentation"; where the default in batch_generator.py is 32 batches, I wasn't sure whether the 4 batches in train_model.py constituted the same level of "aggressiveness" that you referenced.

Same with the filter size, since in train_model.py it's set up as:

# active configuration in train_model.py: two-channel classification
number_of_filters = (8, 16, 32, 64)
mode = "classification"
number_of_outputs = 2

# commented-out alternative: single-channel sigmoid
# number_of_filters = (16, 32, 64, 128)
# mode = "sigmoid"
# number_of_outputs = 1

whereas the brain extraction function sets up a U-Net whose imported weights match the sigmoid model, not the classification model:

from antspynet.architectures import create_unet_model_3d
from antspynet.utilities import get_pretrained_network

unet_model = create_unet_model_3d((*template_shape, 1),
                                  number_of_outputs=1, mode="sigmoid",
                                  number_of_filters=(16, 32, 64, 128),
                                  convolution_kernel_size=(3, 3, 3),
                                  deconvolution_kernel_size=(2, 2, 2))
weights_file_name = get_pretrained_network("mouseT2wBrainExtraction3D")
unet_model.load_weights(weights_file_name)

@ntustison
Member

"Aggressive data augmentation" refers to the spatial and intensity transforms used in batch_generator.py, not the number of batches.

Again, the training scripts are not meant for formal distribution, so I might've tried out different things and commented out others. Whatever final network was trained and posted would ultimately have to match what's in ANTsXNet.

@araikes
Author

araikes commented Sep 11, 2024

Well... my attempt at it didn't go as well as I would have hoped. If you're open to re-training the model with one of our templates included, I'd definitely be interested in whether that improves our masking success. If you're still willing, let me know where/how I can share the data.

@ntustison
Member

Sure, I could do that. I'm heading out for a run but I could put it on the GPU when I get back, as long as you wouldn't mind me updating the weights for the network. I obviously wouldn't post the template and mask but I'd want to make the new weights available.

@araikes
Author

araikes commented Sep 11, 2024

Yeah, that's fine, especially if it makes masking feasible in my ex-vivo data. Just let me know how to get it to you.

@ntustison
Member

Great. Perhaps you can send a link to my email (ntustison@gmail.com) where I can download the template and mask and I'll get it started when I get back.

@araikes
Author

araikes commented Sep 11, 2024

Sent. Let me know if there are issues.

@ntustison
Member

Okay, I updated the weights in the repo and the added variance definitely improves the results on the independent data set that I've been using.
Current: [screenshot]
New: [screenshot]
Let me know if you're seeing a similar improvement on your data. If you need further refinement, I would suggest trying to get the training scripts I sent you to work. If you can do this, you could easily add your own subjects to refine the weights further.

@araikes
Author

araikes commented Sep 16, 2024

That's terrific. I'm gonna have a chance in the middle of the week to assess this on my data. A quick single-image test looks like it over-segments into the skull, but it's certainly a really good starting point if nothing else.

I have two projects that are engaging my brain power for the front part of the week, but I'll tackle a more comprehensive analysis here shortly.

@ntustison
Member

Great.

A quick single-image test looks like it over-segments into the skull, but it's certainly a really good starting point if nothing else.

Yeah, again, I would very much recommend trying to get the training to work on your end with your own data. It would then be really easy to simply add your images and masks to the list of "templates" in train_model.py to tailor these weights specifically for your data.

@ntustison
Member

Specifically these lines. You shouldn't have to change batch_generator.py.
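
Something along these lines; the variable names are made up for illustration and will differ from what's actually in train_model.py:

# Hypothetical: append your study-specific template/mask pair to the
# lists that the training script iterates over.
template_files.append("my_exvivo_template.nii.gz")
mask_files.append("my_exvivo_template_mask.nii.gz")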
