
Using Portcullis on large Datasets in connection with Mikado #62

Open
cc-prolix opened this issue Oct 13, 2022 · 0 comments
Hello,

I am using Portcullis to analyze and quantify splice junctions in alignments generated with HISAT2. I have around 3000 BAM files and am wondering what the best practice would be for using the resulting splice-junction information as input for Mikado (https://mikado.readthedocs.io/en/stable/). Since Portcullis merges the provided BAM files when more than one is given, should I analyze each file separately, or run all of them at once (letting Portcullis merge them), if I want to use the junctions in your Mikado pipeline?
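For reference, this is a minimal sketch of the merged-run approach I had in mind; the output paths and the `mikado configure --junctions` flag are my assumptions from the two tools' documentation, not something verified on this dataset:

```shell
#!/usr/bin/env bash
# Sketch: run Portcullis once over many HISAT2 BAMs and feed the
# filtered junctions to Mikado. Paths/flags are assumptions.
set -euo pipefail

GENOME=genome.fa          # reference used for the HISAT2 alignments
OUTDIR=portcullis_out     # Portcullis output directory (assumed layout below)

# Portcullis merges multiple BAMs internally when more than one is given.
portcullis full --threads 16 --output "$OUTDIR" "$GENOME" alignments/*.bam

# The filtered "pass" junctions BED is what Mikado typically consumes;
# the 3-filt/ location is the default layout in recent Portcullis versions.
JUNCTIONS="$OUTDIR/3-filt/portcullis_filtered.pass.junctions.bed"

# Hand the junctions to Mikado at the configure step (assumed usage).
mikado configure --junctions "$JUNCTIONS" --reference "$GENOME" \
    --list transcript_assemblies.txt configuration.yaml
```

The per-file alternative would loop `portcullis full` over each BAM and pass the resulting BED files to Mikado together, which is the trade-off the question above is asking about.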

Thank you very much for your help!
