Running Nextflow / nf-core Pipelines
Nextflow is a workflow system for building reproducible data analysis pipelines. It chains together simple steps to form a complex analysis, and has been used to build bioinformatics pipelines for many applications, including RNA-seq and Hi-C analysis.
nf-core is "a community effort to collect a curated set of analysis pipelines built using Nextflow." You can find many popular bioinformatics Nextflow pipelines on the nf-core website.
We can take advantage of nf-core on our cluster by installing it in a Conda environment. Before doing so, however, we must set a couple of environment variables in our ~/.bashrc file that Nextflow and nf-core need to correctly cache the Singularity images they'll use throughout the pipeline.
Edit your ~/.bashrc file and append these environment variables to the end of the file:
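A minimal sketch of the relevant variables: `NXF_SINGULARITY_CACHEDIR` tells Nextflow where to cache Singularity images, and `SINGULARITY_CACHEDIR` does the same for Singularity itself. The paths below are placeholders; point them at storage with enough space for multi-gigabyte images.

```shell
# Cache locations for Singularity images (paths are placeholders --
# substitute directories on your own storage allocation)
export NXF_SINGULARITY_CACHEDIR="$HOME/.singularity/nf_cache"
export SINGULARITY_CACHEDIR="$HOME/.singularity/cache"
```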
To make sure these environment variables are set, you can either log out of Luria and log back in, or run the following to load the new shell environment:
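Reloading the shell environment without logging out looks like this:

```shell
# Re-read ~/.bashrc in the current shell so the new variables take effect
source ~/.bashrc
```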
Nextflow and nf-core are installed through Conda, so we'll want to make sure we activate the Conda module before starting:
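Activating the Conda module might look like the following; the exact module name is an assumption, so check `module avail` on Luria for the name used there.

```shell
# Load the Conda module (module name is a guess -- verify with `module avail`)
module load miniconda3
```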
They also require us to have specific channels configured:
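These are the channels bioconda's documentation asks for, added in this order (the last-added channel, conda-forge, ends up with the highest priority):

```shell
# Configure the channels required for bioconda packages, in this order
conda config --add channels defaults
conda config --add channels bioconda
conda config --add channels conda-forge
</imports>
```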
Once these channels have been added, we can go along with the installation:
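A sketch of the installation into a dedicated environment; the environment name "nf-core" is just a suggestion.

```shell
# Create a dedicated environment holding both tools, then activate it
conda create --name nf-core nextflow nf-core
conda activate nf-core
```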
Once installed, update the software periodically to pick up new releases:
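Assuming the environment name from the installation step above, an update would look like:

```shell
# Update both packages inside the environment (name "nf-core" is an assumption)
conda update --name nf-core nextflow nf-core
```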
You can either check the nf-core website to see what Nextflow pipelines are available, or use the nf-core command line tool. The command line tool will also tell you which pipelines you have installed, the version installed, the last time you used them, and more.
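For example (note that recent versions of the nf-core tools moved this under a subcommand, `nf-core pipelines list`):

```shell
# List available pipelines, plus local status for any you've already pulled
nf-core list
```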
Nextflow pipelines all require the revision number and different parameters for running. You can see what parameters are available for a particular revision of a pipeline and which are required at the pipeline's corresponding web page, or by running the pipeline without any parameters and reading the Nextflow error log.
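As an illustration, nf-core pipelines print their parameter documentation when run with `--help`; running with no parameters at all instead produces an error log listing what is missing.

```shell
# Print the parameter documentation for a specific revision of a pipeline
nextflow run nf-core/rnaseq -r 3.14.0 --help
```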
Nextflow also requires you to specify a "profile" for running a pipeline. A profile is essentially a set of sensible settings the pipeline should run with. Each pipeline defines its own profiles, including two test profiles: test, which runs the pipeline on a minimal public dataset, and test_full, which runs it on a full-size public dataset.
In addition to these, nf-core provides profiles for common containerization software, such as Docker, Podman, and Singularity.
We're going to run the nf-core rnaseq pipeline, revision 3.14.0. The parameters for this pipeline are enumerated here: https://nf-co.re/rnaseq/3.14.0/parameters. The two required parameters are --input, the "path to comma-separated file containing information about the samples in the experiment," and --outdir, "the output directory where the results will be saved."
We'll use the test profile to ensure the pipeline can install and run correctly, and the singularity profile, since Luria is set up for use with Singularity. The test profile supplies the pipeline's own inputs, so we only need to specify --outdir. Make sure you load Singularity first, since the singularity profile instructs Nextflow to use Singularity to set up the pipeline.
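Putting the pieces together, the launch command would look like this (the singularity module name is an assumption; check `module avail`):

```shell
# Load Singularity, then launch the pipeline with the test and singularity
# profiles, writing results to the test/ output directory
module load singularity
nextflow run nf-core/rnaseq -r 3.14.0 -profile test,singularity --outdir test/
```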
Nextflow will begin to download the Singularity images needed to run rnaseq v3.14.0. This should take roughly 7 to 12 minutes. Since we've set the environment variables that point Nextflow at the Singularity image cache, subsequent runs of this revision of the pipeline will start much faster.
As the pipeline runs, Nextflow writes metadata into .nextflow/cache and intermediate data into the work/ directory. If the pipeline errors out at any point, you can read the error log, fix the issue, then add the -resume flag to your command to pick up where you left off. Nextflow reads the metadata and data generated in the previous run to determine where in the pipeline to restart.
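For the example run above, resuming would look like:

```shell
# Same command as before, plus -resume (note the single dash: it is a
# Nextflow option, not a pipeline parameter)
nextflow run nf-core/rnaseq -r 3.14.0 -profile test,singularity --outdir test/ -resume
```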
Once the pipeline finishes setting itself up, it will run with a minimal public dataset as input, then write the results into the test/ directory we specified. This directory will contain extensive reports and results from each stage of the run.