The shotgun metagenomics pipelines use whole genome shotgun data for taxonomic and functional characterization. They are not designed to work with amplicon data from a single region; instead, they make use of marker sequences from across the entire genome. If given amplicon data, the pipelines may produce errors or unexpected output.
The optional parameters were carefully chosen based on (1) the most common scenarios of NGS data analysis, (2) suggestions from the developers, and (3) published results. The different pipelines available on Nephele target different kinds of NGS studies, such as whole genome shotgun sequencing, 16S microbiome surveys, and functional annotation of microbial communities.
Most users submit their jobs with the default values of the optional parameters. In our experience, more experienced bioinformaticians adjust the parameters to suit their input data. We have also received feedback from novice microbiome researchers and students that they study the optional parameters (reading the help text and testing different values, even if a run fails) as part of learning microbiome analysis.
Why would you want to run Nephele's Pre-processing QC pipeline before you run a microbiome analysis?
Studies show that quality filtering can greatly improve microbiome analysis results. Best practices on working with sequencing data include doing a series of QC steps to verify and even improve the quality of the data. Our Pre-processing QC pipeline was designed to run a quality check by default, so the user can run it without choosing any options and receive FastQC tables and graphs providing information on the quality of individual samples. After evaluating these results, the user can submit their files to an analysis pipeline or return to the QC pipeline to trim reads and merge read pairs as needed.
Even though our 16S and WGS pipelines include quality filtering, trimming, and merging steps, it may be best to run those processing steps separately ahead of time. We have incorporated the tools cutadapt and Trimmomatic into our Pre-processing QC pipeline to give users more control over parameters, which can be helpful for some datasets, especially if the amplicon region is of variable length. For the read merging step, we have integrated the FLASH merger, which some results suggest may provide better precision and recall than the native tools used by QIIME1 and mothur. For longer amplicon regions with a short overlap between paired reads, FLASH may perform better than the DADA2 merger. We therefore designed the QC pipeline to provide these programs to our users as well, to help them get better results. For more information about the tools we use, see the details page.
Some usage examples: the Truncation length parameter in DADA2 or the Minimum Phred quality score parameter in QIIME1.

The developers of QIIME released QIIME 2.0 in 2017 and announced they would discontinue support for QIIME (version 1.9). A manuscript was published in July 2019 describing the new plugin-based architecture of QIIME 2.0. The original version of QIIME offered clustering tools such as uclust and usearch for closed-reference, open-reference, and de novo OTU clustering. It was common practice to use open-reference clustering at 97% similarity. The new version includes plugins such as DADA2 (running the DADA2 R package) and Deblur that improve quality control, perform denoising, and return sequence variants. The QIIME 2.0 documentation recommends the use of these denoising algorithms over the clustering methods used previously. For researchers who still find it useful to cluster reads into OTUs, the QIIME team later added plugins for clustering with vsearch to the QIIME 2.0 architecture.
The Nephele team has adopted QIIME 2.0 for the clustering steps (QIIME 2.0 16S pipeline), the Deblur denoising algorithm (QIIME 2.0 16S pipeline), and several visualization options in the Downstream Analysis Pipeline (Explore tab). Even though QIIME 2.0 also offers a plugin for DADA2, the Nephele team decided to implement a separate pipeline using the native DADA2 R package.
Suppose you are a user of the QIIME pipeline and are wondering which pipeline to use after Nephele retires the QIIME 1.9 pipeline (OTU clustering method). In that case, we recommend adopting the denoising method available in the DADA2 pipeline (for paired-end or single-end reads) or in the QIIME 2.0 pipeline with the Deblur option for single-end reads. Alternatively, you could continue using the clustering-based methods available in the new QIIME 2.0 16S pipeline (vsearch option), or the clustering method available in the mothur pipeline if you have a short amplicon design, such as the V4 16S region, and good quality data.
The developers of QIIME released QIIME 2.0 in 2017 and announced they would discontinue support for QIIME (version 1.9). The current ITS pipeline on Nephele is based on QIIME 1.9, so a better supported method was needed. The Nephele team decided to use DADA2, which improves quality control, performs denoising, and returns sequence variants. The pipeline is based on the DADA2 ITS Tutorial.
QIIME2 is a framework that runs other third-party tools for analysis, including VSEARCH and DADA2. For computational reasons, we sometimes use the QIIME2 framework, as in our VSEARCH and Downstream Analysis pipelines, but for our DADA2 pipeline we run the DADA2 package directly. This allows us to provide more detailed output and more flexibility in user options. See the pipeline descriptions for more information.
These are the pipelines we recommend for most datasets. If you are new to metagenomics/genomics, we suggest using a recommended pipeline, as these are the most robust, efficient, and generally most accurate based on the literature and our testing. Our other pipelines are for more advanced users who want to try other tools.
DiscoVir can use the metagenomic assembly scaffolds and BAM files (made by mapping reads back to the scaffolds) from the WGSA2 pipeline. You can find the FASTA and BAM files in the asmb_files directory of the WGSA2 outputs folder.
See the logfile.txt file, which can be found directly on the results download page as well as in the PipelineResults.JOBID.tar.gz archive. Specifically, you can do a text search for ERROR to see some common errors that can arise with data analyses on Nephele. Many of these errors are described further in additional FAQs here, which provide detailed suggestions or solutions. If you continue to have issues, please do not hesitate to send us a support request.
We check the size of the files produced by the cluster.split command, and if any are larger than 36Gb (or 60% of available memory), we do not run the rest of the pipeline. In this case, OTU clustering and visualizations will not be made.
Check the logfile.txt. It is possible that one or more of your samples did not have the minimum number of OTUs or reads and was excluded from further analysis. This will be indicated in the logfile.txt output.
Excluded samples are listed in the samples_being_ignored.txt file. You may also look at the logfile.txt to see why those samples have been excluded. Samples that have low OTU or sequence variant counts are sometimes removed because of the Sampling depth cutoff parameter. If you did not specify the parameter, please see the FAQ: How is the sampling depth calculated? for more information. If you open the otu_summary_table.txt file, you can see OTU counts for all of your samples. Adjusting the Sampling depth parameter accordingly (i.e., entering a value that will include all of your samples) in a new run with the same data will resolve this issue. The parameter can be set under the Analysis tab of the job submission page, and you can use the job resubmission feature of Nephele to more easily resubmit your data with a different value.
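If you are comfortable in R, one quick way to pick a value that keeps all of your samples is to compute per-sample totals from the OTU table and take the minimum. This is only a minimal sketch, assuming otu_summary_table.txt is tab-delimited with OTUs as rows and samples as columns (check your file's actual layout first):

# Read the OTU summary table (assumed tab-delimited, first column = OTU IDs)
otu <- read.delim("otu_summary_table.txt", row.names = 1, check.names = FALSE)
# Total counts per sample (assumes samples are columns)
sample_totals <- colSums(otu)
sort(sample_totals)
# A Sampling depth at or below this value will include all samples
min(sample_totals)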
The DADA2 pipeline is highly sensitive to sequence quality and primer trimming. It is very important to specify the correct primer lengths at job submission (or remove the primers from the data before submitting), as these sequences may interfere with the denoising of the reads as well as with chimera removal (if you are unsure of the primer lengths, we advise against choosing the chimera removal option). See this DADA2 FAQ for more information.
The DADA2 pipeline produces quality profile plots that you can look at to gauge the quality of your data (qualityProfile_R1/2.pdf). If the data is poor quality, the reads may be filtered out during the filterAndTrim step. You can also see a table in the log file of how many reads pass this step. Additionally, if the data is poor quality, reads that pass the filter may be trimmed too much in the filterAndTrim step and may not merge properly in the mergePairs step. You can search the log file for paired-reads to see how many reads successfully merged for each sample. Sometimes, it is helpful to use a trimming program such as cutadapt, Trimmomatic, or BBDuk to trim for quality (and/or primers) prior to running DADA2. You can use Nephele's QC pipeline to do this pre-processing of your data; see here for more information.
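For orientation, here is roughly what these steps look like when calling the DADA2 R package directly. This is only a minimal paired-end sketch, not Nephele's exact pipeline code; the file names, truncation lengths, and primer lengths (trimLeft) are placeholder values you would adjust based on your own quality profile plots:

library(dada2)

# Placeholder input and output file names
fnF <- "sample_R1.fastq.gz"; fnR <- "sample_R2.fastq.gz"
filtF <- "filt_R1.fastq.gz"; filtR <- "filt_R2.fastq.gz"

# Filter and trim; over-aggressive truncLen can leave too little overlap
# for merging, and untrimmed primers (trimLeft) can disrupt denoising
out <- filterAndTrim(fnF, filtF, fnR, filtR,
                     truncLen = c(240, 160), trimLeft = c(17, 21),
                     maxEE = c(2, 2))
out  # reads in vs. reads passing the filter

# Learn error rates, denoise, then merge the paired reads
errF <- learnErrors(filtF); errR <- learnErrors(filtR)
ddF <- dada(filtF, err = errF); ddR <- dada(filtR, err = errR)
merged <- mergePairs(ddF, filtF, ddR, filtR, verbose = TRUE)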
We have written a custom script that can perform this task for you, which you can download here.
Next, gather the files you plan to combine into a single directory.
When you download and unzip the results from your job, you will have a directory called outputs/.
Navigate to TAXprofiles or PWYprofiles and find the bin directory. It should contain a number of files ending in 4krona.txt.
Move these files from each Nephele job into a single, combined folder.
The file names themselves can be changed as desired, as long as TAX and PWY files are not combined.
Then, from the command line, run the Rscript command. RStudio also includes a "Terminal" tab next to the "Console" tab that can run Rscript commands.
Let's say I have gathered all 4krona files into combine_my_files. I would next run:

Rscript WGSA2_MergeKronaFiles.R --binDIR combine_my_files --objTYPE text --outFILE analysis_table

This would result in an output file called analysis_table.txt, which would be a complete table from all files contained in combine_my_files.
We currently support output into text, phyloseq, or biom formats for easy transfer to your preferred analysis software.
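For example, assuming the --objTYPE flag takes the format name directly, the same command with --objTYPE phyloseq would produce output in phyloseq format instead of a plain text table:

Rscript WGSA2_MergeKronaFiles.R --binDIR combine_my_files --objTYPE phyloseq --outFILE analysis_table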
Please refer to the Release Notes to see when Nephele updates were made. Also, in the initial email you receive for each job, you will find the version of Nephele that corresponds to the Release Notes, as well as a copy of all the parameters that were selected for that job. Software package versions for the pipelines are also listed in the log files and current versions can be found on the pipeline details pages.
Choosing a sampling depth is generally arbitrary. It is usually recommended to choose a value high enough to capture the diversity present in samples with high read counts, but low enough to include the majority of your samples. For a simple community with only a handful of abundant members, for example, a sampling depth of 5,000 or less may suffice for an accurate estimate of diversity. For a more complex community with many low-abundance members, however, a much higher sampling depth, 10,000 or more, is generally necessary.
Nephele specifies a sampling depth of 10,000 reads as the minimum requirement for all downstream analysis. The pipelines use the following logic to determine the sampling depth:
Note: Users are encouraged to specify the sampling depth that is most appropriate for their studies. There is no formula that can precisely determine the most appropriate value based solely on the distribution of read counts and the number of samples. If the pipeline does not generate any downstream analysis for your samples, it is most likely that the sample with the fewest reads is below 10,000. You will need to lower the sampling depth in order to run the downstream analysis.
You can use a Unix utility like wget (FAQ) to transfer files to your computer using a terminal program (e.g., a Linux terminal, macOS Terminal.app, or Windows Command Prompt/PowerShell). Right-click on any download button or link on the Nephele website, and copy the link address to use with wget on the command line. Here is an example command for downloading job results:
wget -O results.tar.gz "https://nephele.niaid.nih.gov/result_link/1bee6ca12909"
Downloading via the command line is useful if you would like to transfer job results to an HPC or other remote machine, or if the file to download is large and the transfer may take a while to complete.
WGSA2 TEDreads can be downloaded similarly, but the file will have a .tar extension (not .tar.gz).
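Once downloaded, both kinds of archive can be unpacked with tar on the command line (the file names here are just examples):

tar -xzf results.tar.gz
tar -xf TEDreads.tar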
https://nephele.niaid.nih.gov/metrics/<job_id>