Another Bioinformatics website


Discovering tumor neoantigens is easier than getting a job in Spain

The story of this post started a few months ago at an immunotherapy conference in Barcelona; at that time I was looking for a job in cancer research. I had a great time learning about new cancer therapies and meeting nice people, but few doctors were interested in, or had the money to, hire a bioinformatician. Nevertheless, one of them told me: “I don’t know you, but if you show me that you are able to discover tumor neoantigens, then I’ll consider hiring you in the future”. I hope this person reads the post 😉


But before starting, I want to dedicate this post to all the doctors, nurses, scientists and patients who fight every day against cancer, especially those from Hospital Miguel Servet de Zaragoza and Clínica Universidad de Navarra who treat my father; also to the many other brilliant and devoted people I have met from places like Vall d’Hebron Hospital, Quiron Dexeus or 12 de Octubre; and finally, to my new team at Agenus Pharmaceuticals.


State of the art of tumor neoantigen discovery

With next-generation sequencing (NGS) we are able to sequence the full genome of a tumor, or at least its coding regions (the exome). If we can identify, with the help of NGS data, which tumor-specific genes express mutant peptides able to activate T cells, then we can design vaccines to boost the natural immune response to the tumor; these mutant peptides are called neoantigens. There is a nice and recent Nature Review about neoantigen discovery, and there are also interesting articles with practical cases of neoantigen discovery and personalized treatments for cancer patients (e.g. Robbins et al., Pritchard et al.).

But we are interested in the bioinformatics point of view, so let’s summarize some neoantigen discovery protocols and software from the literature:

  • pVAC-Seq v4.0: a pipeline for the identification of personalized Variant Antigens by Cancer Sequencing (pVAC-Seq) that integrates tumor mutation and expression data (DNA- and RNA-Seq) (original article).
  • TSNAD: integrated software for cancer somatic mutation and tumour-specific neoantigen detection (original article).
  • neoantigenR: an R package that identifies peptide epitope candidates (bioRxiv article).
  • MuPeXI: mutant peptide extractor and informer, a tool for predicting neo-epitopes from tumor sequencing data (original article).
  • INTEGRATE-neo: a pipeline for personalized gene fusion neoantigen discovery (original article).

We can conclude that the topic is quite new (most of the previous pipelines were published during the last year) and that neoantigen identification is a very complex task, as we can see, for example, in the pVAC-Seq workflow schema:

Overview of the pipeline pVAC-Seq

To show an example of a neoantigen discovery pipeline, I’ll try to simplify the process and follow the Best Practices guide from the Broad Institute for somatic variant discovery:

Best Practices for Somatic SNV and Indel Discovery


Tumor neoantigen discovery pipeline description

This protocol has been developed for didactic purposes; it has not been formally validated and is not intended for human diagnosis.

Basically I propose here a somatic variant calling pipeline with some additional filtering of the variants at the end. I tested the pipeline with whole exome sequencing (WES) Illumina paired-end reads from normal and tumor tissues.

The full pipeline can be summarized in 10 steps; here is a brief explanation of each step and some additional considerations about the chosen tools:

  1. Normal and tumor reads mapping, sorting and indexing:
    • In this protocol I have chosen BWA-MEM, as recommended in the Cornish and Guda paper; the original authors used Novoalign, but other aligners could be considered. In this respect, aligners such as Bowtie2 may be too sensitive and could report some extra false positive variants, but that is just a personal opinion.
  2. Removal of duplicated reads:
    • This step could perhaps be skipped, as there are not many redundant reads and it is unlikely that they map to variant positions. MarkDuplicates from Picard tools was used; in the future I would suggest trying ‘MarkDuplicatesWithMateCigar’ from the same suite, but I do not have enough experience to recommend it.
  3. Adding sample/group information to the mapped reads:
    • This is just a formal requirement for the variant callers to work properly and distinguish between normal and tumor groups of reads.
  4. Realignment around indels:
    • This step is already integrated in the MuTect2 somatic variant caller and could be redundant, but other variant callers do not integrate it, so it is strongly recommended to avoid false calling of indels and surrounding SNPs.
  5. Somatic variant calling with MuTect2:
    • MuTect2 is integrated in the GATK suite. Its basic operation is similar to that of the popular HaplotypeCaller, with a few key differences: it takes tumor and normal alignments as input and produces somatic variant calls only.
  6. Filtering of somatic variants:
    • Some strict filters are imposed to keep only true somatic variants: a minimum depth of 10 reads and a variant frequency of at least 10%. Additionally, variants must be located in coding regions of genes (this is checked in the annotation step).
  7. Annotation of filtered somatic variants
    • Filtered variants are annotated with gene and transcript names and IDs, chromosome position, nucleotide and protein mutations, and other relevant information about the antigens, in a tab-separated, CSV or HTML human-readable format.
  8. Neoantigen peptides design
    • Neoantigen peptides are designed by extracting the amino acid sequence containing the variant (10-15 residues).
  9. HLA-typing with exome or RNA-seq data
    • Normal and tumor exome data, or RNA-seq data (more accurate), will be used as input to AmpliHLA to retrieve the HLA alleles of the patient.
  10. HLA-binding affinity calculation and peptide ranking
    • The NetMHCpan software will calculate the binding affinities of the designed peptides to the HLA alleles; these predictions, together with expression data, will be used to estimate the potential immunogenicity of the peptides.

Required tools and files

Before starting, you should install the following tools on your server or computer:

And download the human genome files and SNP/indel annotations (always be consistent with the genome version and keep in mind that new versions may be available):

The previous files (or equivalent ones) can also be downloaded from the Google Cloud repository (not tested).

After this long introduction, let’s start the analysis…


Somatic variant calling and tumor neoantigen discovery pipeline

To simplify, I’ll consider that all the input files, including the genome and read files, are in the working directory, and all the output files will be stored in the ‘output’ folder. All the comments are marked with ‘#’ to differentiate them from the commands. Programs are installed in the standard ‘/opt’ folder of the Linux system and run with 30 threads when possible (the number of parallel threads/processes should be adjusted depending on your server or computer). Analysis time will depend on the system and the number of parallel processes; on a small server with 30 threads it may take around 15-20 hours to complete. Steps 8 and 10 are still not fully automated.

1. Normal and tumor reads mapping, sorting and indexing

# Index genome FASTA file for BWA mapping:
/opt/bwa-0.7.15/bwa index Homo_sapiens.GRCh38.dna.primary_assembly.fa

# Normal reads mapping, sorting and indexing with BWA-MEM:
/opt/bwa-0.7.15/bwa mem -M -v 1 -t 30 Homo_sapiens.GRCh38.dna.primary_assembly.fa NORMAL_R1.fastq.gz NORMAL_R2.fastq.gz | /opt/samtools-1.6/bin/samtools view -Sb -F 4 -@ 30 | /opt/samtools-1.6/bin/samtools sort -@ 30 > output/NORMAL.bam ; /opt/samtools-1.6/bin/samtools index -@ 30 output/NORMAL.bam output/NORMAL.bai
# Tumor reads mapping, sorting and indexing:
/opt/bwa-0.7.15/bwa mem -M -v 1 -t 30 Homo_sapiens.GRCh38.dna.primary_assembly.fa TUMOR_R1.fastq.gz TUMOR_R2.fastq.gz | /opt/samtools-1.6/bin/samtools view -Sb -F 4 -@ 30 | /opt/samtools-1.6/bin/samtools sort -@ 30 > output/TUMOR.bam ; /opt/samtools-1.6/bin/samtools index -@ 30 output/TUMOR.bam output/TUMOR.bai

2. Removal of duplicated reads

# Removes normal duplicated reads:
java -jar /opt/picard-2.14.0/build/libs/picard.jar MarkDuplicates VERBOSITY=ERROR VALIDATION_STRINGENCY=SILENT REMOVE_DUPLICATES=true I=output/NORMAL.bam O=output/NORMAL.dups_removed.bam M=output/NORMAL.metrics.txt
# Removes tumor duplicated reads:
java -jar /opt/picard-2.14.0/build/libs/picard.jar MarkDuplicates VERBOSITY=ERROR VALIDATION_STRINGENCY=SILENT REMOVE_DUPLICATES=true I=output/TUMOR.bam O=output/TUMOR.dups_removed.bam M=output/TUMOR.metrics.txt

3. Adding sample/group information to the mapped reads

# Adds normal read group, the name of the individual and the phenotype (3998_normal):
java -jar /opt/picard-2.14.0/build/libs/picard.jar AddOrReplaceReadGroups VERBOSITY=ERROR VALIDATION_STRINGENCY=SILENT CREATE_INDEX=true SO=coordinate RGID=3998_normal RGLB=3998_normal RGPL=illumina RGPU=3998_normal RGSM=3998_normal I=output/NORMAL.dups_removed.bam O=output/NORMAL.grouped.bam
# Adds tumor read group, the name of the individual and the phenotype (3998_tumor):
java -jar /opt/picard-2.14.0/build/libs/picard.jar AddOrReplaceReadGroups VERBOSITY=ERROR VALIDATION_STRINGENCY=SILENT CREATE_INDEX=true SO=coordinate RGID=3998_tumor RGLB=3998_tumor RGPL=illumina RGPU=3998_tumor RGSM=3998_tumor I=output/TUMOR.dups_removed.bam O=output/TUMOR.grouped.bam

4. Realignment around indels

# Normal reads realignment around indels using the GATK modules RealignerTargetCreator and IndelRealigner:
java -jar /opt/gatk-3.8/GenomeAnalysisTK.jar -T RealignerTargetCreator -nt 30 -R Homo_sapiens.GRCh38.dna.primary_assembly.fa -I output/NORMAL.grouped.bam -known /Mills_and_1000G_gold_standard.indels.b38.primary_assembly.vcf.gz -o output/NORMAL.intervals ; java -jar /opt/gatk-3.8/GenomeAnalysisTK.jar -T IndelRealigner -maxReads 100000 -maxInMemory 1000000 -R Homo_sapiens.GRCh38.dna.primary_assembly.fa -targetIntervals output/NORMAL.intervals -I output/NORMAL.grouped.bam -known /Mills_and_1000G_gold_standard.indels.b38.primary_assembly.vcf.gz -o output/NORMAL.realign.bam
# Tumor reads realignment around indels using the GATK modules RealignerTargetCreator and IndelRealigner:
java -jar /opt/gatk-3.8/GenomeAnalysisTK.jar -T RealignerTargetCreator -nt 30 -R Homo_sapiens.GRCh38.dna.primary_assembly.fa -I output/TUMOR.grouped.bam -known /Mills_and_1000G_gold_standard.indels.b38.primary_assembly.vcf.gz -o output/TUMOR.intervals ; java -jar /opt/gatk-3.8/GenomeAnalysisTK.jar -T IndelRealigner -maxReads 100000 -maxInMemory 1000000 -R Homo_sapiens.GRCh38.dna.primary_assembly.fa -targetIntervals output/TUMOR.intervals -I output/TUMOR.grouped.bam -known /Mills_and_1000G_gold_standard.indels.b38.primary_assembly.vcf.gz -o output/TUMOR.realign.bam

5. Somatic variant calling with Mutect2 (GATK tool)

# Creates dbSNP VCF index file required by GATK:
gunzip /All_20170710.vcf.gz ; bgzip /All_20170710.vcf ; tabix -p vcf /All_20170710.vcf.gz
# Base quality score recalibration for normal reads alignments (BQSR):
java -jar /opt/gatk-3.8/GenomeAnalysisTK.jar -T BaseRecalibrator -nct 30 -knownSites /All_20170710.vcf.gz -R Homo_sapiens.GRCh38.dna.primary_assembly.fa -I output/NORMAL.realign.bam -o output/NORMAL.bqsr ; java -jar /opt/gatk-3.8/GenomeAnalysisTK.jar -T PrintReads -nct 30 -BQSR output/NORMAL.bqsr -R Homo_sapiens.GRCh38.dna.primary_assembly.fa -I output/NORMAL.realign.bam -o output/NORMAL.recalib.bam
# Base quality score recalibration for tumor reads alignments (BQSR):
java -jar /opt/gatk-3.8/GenomeAnalysisTK.jar -T BaseRecalibrator -nct 30 -knownSites /All_20170710.vcf.gz -R Homo_sapiens.GRCh38.dna.primary_assembly.fa -I output/TUMOR.realign.bam -o output/TUMOR.bqsr ; java -jar /opt/gatk-3.8/GenomeAnalysisTK.jar -T PrintReads -nct 30 -BQSR output/TUMOR.bqsr -R Homo_sapiens.GRCh38.dna.primary_assembly.fa -I output/TUMOR.realign.bam -o output/TUMOR.recalib.bam

# Calls somatic variants (SNPs and indels) via local re-assembly of haplotypes with MuTect2:
java -jar /opt/gatk-3.8/GenomeAnalysisTK.jar -T MuTect2 -nct 30 -D /All_20170710.vcf.gz -cosmic /CosmicCodingMuts.vcf.gz -R Homo_sapiens.GRCh38.dna.primary_assembly.fa -I:normal output/NORMAL.recalib.bam -I:tumor output/TUMOR.recalib.bam -o output/normal_vs_tumor.mutect2.all.vcf

6. Filtering of somatic variants

MuTect2 may retrieve more than 10,000 variants, but after filtering the number is reduced by a factor of around 8-10.

# Filters somatic variants (PASS calls with depth >= 10 and variant frequency >= 10%):
filter_somatic_variants.pl -pass -vardepth 10 -varfreq 0.1 -i output/normal_vs_tumor.mutect2.all.vcf -o output/normal_vs_tumor.mutect2.somatic.vcf
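Since `filter_somatic_variants.pl` is a custom script, here is a minimal sketch of the same filtering logic in plain awk, applied to a toy single-sample VCF built on the fly (the file, positions and DP/AF values below are invented for illustration and the column layout is simplified compared to real MuTect2 output):

```shell
# Build a toy VCF (tab-separated; all values are made up):
printf '#CHROM\tPOS\tID\tREF\tALT\tQUAL\tFILTER\tINFO\tFORMAT\tTUMOR\n' > toy.vcf
printf '1\t1000\t.\tA\tT\t.\tPASS\t.\tDP:AF\t50:0.25\n' >> toy.vcf
printf '1\t2000\t.\tG\tC\t.\tPASS\t.\tDP:AF\t8:0.05\n'  >> toy.vcf

# Keep PASS variants with depth >= 10 and variant frequency >= 0.1 in the sample column:
awk -F'\t' '/^#/ {print; next}
            $7 == "PASS" {
              split("", f);
              n = split($9, k, ":"); split($10, v, ":");
              for (i = 1; i <= n; i++) f[k[i]] = v[i];
              if (f["DP"] + 0 >= 10 && f["AF"] + 0 >= 0.1) print
            }' toy.vcf > toy.somatic.vcf

cat toy.somatic.vcf   # only the 1:1000 A>T variant survives
```

The coding-region requirement is not applied here; as noted above, that part is handled later, during annotation.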

7. Annotation of filtered somatic variants

If we obtained around 1000 variants after the previous filtering step, annotating only the variants present in coding regions will reduce the number to a few hundred.

# Annotates final variants and prints the details in a tabulated format file:
annotate_variants.pl -i output/normal_vs_tumor.mutect2.somatic.vcf > output/normal_vs_tumor.mutect2.somatic.txt

8. Neoantigen peptides design

# Creates a FASTA file with the sequences of the peptides (13 aa) containing the somatic mutations previously detected
variants_to_peptides.pl -l 13 -i output/normal_vs_tumor.mutect2.somatic.vcf -o output/normal_vs_tumor.mutect2.peptides.fa
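The peptide design of step 8 boils down to taking a window of residues around each mutated position. Here is a toy sketch of that windowing logic (the protein sequence and mutation position are invented; the real `variants_to_peptides.pl` works from the annotated VCF):

```shell
# Toy 30-residue protein with a hypothetical mutation at residue 15:
seq=MKTAYIAKQRQISFVKSHFSRQLEERLGLI
pos=15

# 13-aa window: 6 residues on each side of the mutation, clipped at the N-terminus:
start=$(( pos - 6 )); [ "$start" -lt 1 ] && start=1
end=$(( pos + 6 ))
peptide=$(echo "$seq" | cut -c"$start"-"$end")
echo "$peptide"    # 13 residues centred on position 15
```

A real implementation also has to clip at the C-terminus, translate the mutant codon, and handle frameshifts and indels, which is why a dedicated script is used in the pipeline.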

9. HLA-typing with exome or RNA-seq data

# Analyzes WES data (or RNA-seq data if available) and retrieves the HLA alleles of the patient, which will be used in the last step to calculate the affinities of the peptides.
ampliHLA.pl -thr 30 -ty mapping -r gen -i NORMAL_R1.fastq.gz NORMAL_R2.fastq.gz > amplihla_normal.txt
ampliHLA.pl -thr 30 -ty mapping -r gen -i TUMOR_R1.fastq.gz TUMOR_R2.fastq.gz > amplihla_tumor.txt

10. HLA-binding affinity calculation and peptide ranking

# Calculates and ranks the MHC-binding affinity
netMHCpan -BA -xls -f output/normal_vs_tumor.mutect2.peptides.fa -a HLA-A01:01,HLA-A02:01,HLA-B15:01...
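NetMHCpan output needs parsing of its own, but the final ranking step is conceptually just a sort on predicted affinity. Here is a sketch on a hypothetical tab-separated peptide/allele/affinity table (all peptides and values below are invented), keeping binders under the commonly used 500 nM affinity threshold:

```shell
# Toy predictions: peptide, HLA allele, predicted affinity in nM (made-up values):
printf 'PEPTIDEAAAAA\tHLA-A*02:01\t35.2\n'  > affinities.txt
printf 'PEPTIDEBBBBB\tHLA-A*01:01\t820.5\n' >> affinities.txt
printf 'PEPTIDECCCCC\tHLA-B*15:01\t210.0\n' >> affinities.txt

# Rank ascending (lower nM = stronger binder) and keep candidates under 500 nM:
sort -k3,3n affinities.txt | awk -F'\t' '$3 < 500' > ranked_binders.txt
cat ranked_binders.txt
```

In practice this ranking would also be weighted by the expression data mentioned in step 10 before selecting vaccine candidates.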


Finally I didn’t get any job in Spain, but I learned a lot 🙂

 

Amplicon sequencing and high-throughput genotyping – HLA typing

In the previous post I explained the fundamentals of the amplicon sequencing (AS) technique; today I will show some current and future applications in HLA-typing.

Another application of AS is the genotyping of complex gene families, for example the major histocompatibility complex (MHC). This gene family is known to be highly polymorphic (high allele variation) and to contain multiple copies of an ancestral gene (paralogues). MHC genes of class I and II encode the cell-surface receptors that present antigens to immune cells. The MHC in humans is also called the human leukocyte antigen (HLA) system. HLA-typing plays a key role in compatibility for tissue transplantation, has been associated with more than 100 different diseases (primarily autoimmune diseases) and, more recently, with positive and negative responses to various drugs. HLA loci are so polymorphic that no two individuals in a non-inbred population share the same set of alleles (except identical twins).

Number of HLA alleles known up to date. Source: IMGT-HLA database

As in personalized medicine and metagenomics/metabarcoding, there are 2 approaches to NGS HLA-typing: the first is to use whole-genome, exome or transcriptome data, and the second is to amplify specific HLA loci by amplicon sequencing. The second approach is suitable for typing hundreds or thousands of individuals, but requires tested primers for multiplex PCR of the HLA regions.

Basically, the HLA-typing analysis workflow after sequencing the PCR products consists of:

  1. Map/align the reads against HLA allele reference sequences from the IMGT-HLA public database.
  2. Retrieve the genotypes from the reference alleles with the longest and best-scoring alignments.
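Step 2 is essentially a ranking problem. As a toy illustration (the allele names and read counts below are invented, and real HLA callers model ambiguity and phasing far more carefully), picking the two best-supported alleles from a per-allele mapped-read summary could look like:

```shell
# Toy summary: reads mapped per IMGT-HLA candidate allele (hypothetical counts):
printf 'HLA-A*01:01\t1520\n' >  allele_counts.txt
printf 'HLA-A*02:01\t1498\n' >> allele_counts.txt
printf 'HLA-A*03:01\t85\n'   >> allele_counts.txt

# Keep the two best-supported alleles (one per chromosome copy):
sort -k2,2nr allele_counts.txt | head -2
```

A homozygous individual would show a single dominant allele instead of two, which is one of the ambiguities dedicated HLA-typing tools are designed to resolve.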

Inoue et al. wrote a complete review about the topic in ‘The impact of next-generation sequencing technologies on HLA research‘.

HLA-typing workflow. Modified from Inoue et al.

Nowadays there are commercial kits that allow reliable, fast and economical HLA-typing: Illumina TruSight HLA v2, Omixon Holotype HLA, GenDx NGSgo or One Lambda NXType NGS.

Amplicon sequencing and high-throughput genotyping – Metagenomics

In the previous post I explained the fundamentals of the amplicon sequencing (AS) technique; today I will show some current and future applications in metagenomics and metabarcoding.

Metagenomics (also referred to as ‘environmental’ or ‘community’ genomics) is the study of genetic material recovered directly from environmental samples. This discipline applies a suite of genomic technologies and bioinformatics tools to directly access the genetic content of entire communities of organisms. We usually use the term metabarcoding when we apply the amplicon sequencing approach in metagenomics studies, while the term metagenomics is preferred when we study full genomes, not just a few gene regions.

Metabarcoding workflow. Source: http://www.naturemetrics.co.uk

For metabarcoding, the 16S rRNA gene is the most common universal DNA barcode (marker) used to identify species from across the Tree of Life with great accuracy, but other genes, such as cytochrome c oxidase subunit 1 (CO1), rRNA genes (16S/18S/28S) and plant-specific markers (rbcL, matK and trnH-psbA), and gene regions, such as the internal transcribed spacers (ITSs), are also used (Kress et al. 2014; Joly et al. 2014). These markers have mutation rates fast enough to discriminate closely related species while being stable enough to identify individuals of the same species.

Prokaryotic and eukaryotic rRNA operons

A perfect metagenomics barcode/marker should…

  • be present in all the organisms, in all the cells
  • have variable sequence among different species
  • be conserved among individuals of the same species
  • be easy to amplify and not too long for sequencing

Recommended DNA barcodes for metagenomics

The pioneering metabarcoding study of Sogin et al. 2006, which deciphered the microbial diversity of the deep sea, used the V6 hypervariable region of the rRNA gene as barcode. Sogin et al. sequenced around 118,000 PCR amplicons from environmental DNA preparations and unveiled thousands of previously unknown species.

Note that in metabarcoding DNA tags are not used to pick out single individuals, but to identify different samples (of water, soil, air…).

Amplicon sequencing and high-throughput genotyping – Basics

The amplicon sequencing (AS) technique consists of sequencing the products of multiple PCRs (amplicons), where a single amplicon is the set of DNA sequences obtained from one individual PCR.

Before the arrival of high-throughput sequencing technologies, PCR products were Sanger sequenced individually. But Sanger sequencing can only resolve one DNA sequence (allele) per sample, or at most a mix of two sequences differing in a single nucleotide (usually heterozygous alleles):

Sanger sequencing limitations

The solution to this limitation was bacterial cloning, followed by isolation, amplification and sequencing of individual clones. Nevertheless, bacterial cloning is a time-consuming approach that is only feasible for a few dozen sequences.

Bacterial cloning to Sanger sequence clones individually

 

Fortunately, high-throughput sequencing (HTS) techniques, also called next-generation sequencing (NGS), are able to read millions of sequences with individual resolution. The combination of amplicon sequencing with NGS allows us to genotype hundreds or thousands of samples in a single experiment.

The only requirement to carry out this kind of experiment is to include different DNA tags to identify the individuals/samples in the experiment. A DNA tag is a short, unique sequence of nucleotides (e.g. ACGGTA) that is attached to the 5′ end of a PCR primer or ligated after the individual PCR. Each sample/individual to analyze gets a different tag, so that the sequences or reads from each individual PCR (amplicon) can be classified (Binladen et al. 2007; Meyer et al. 2007).
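The tag-based classification of reads can be sketched with awk reading 4-line FASTQ records (the tags and reads below are invented, and real demultiplexers also tolerate sequencing errors within the tag):

```shell
# Toy FASTQ with two reads, each starting with a 6-nt sample tag (ACGGTA, TTGCCA):
printf '@r1\nACGGTACCCTTTGGGA\n+\nIIIIIIIIIIIIIIII\n' >  reads.fq
printf '@r2\nTTGCCAGGGAAACCCT\n+\nIIIIIIIIIIIIIIII\n' >> reads.fq

# Demultiplex: write each 4-line record to a file named after its leading tag:
awk 'NR % 4 == 1 {h = $0}
     NR % 4 == 2 {s = $0; tag = substr(s, 1, 6)}
     NR % 4 == 3 {p = $0}
     NR % 4 == 0 {print h "\n" s "\n" p "\n" $0 > (tag ".fq")}' reads.fq

wc -l ACGGTA.fq TTGCCA.fq   # 4 lines each
```

In a real experiment the tag would also be trimmed off before downstream analysis, and reads with unknown tags would be collected into a separate file.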

High-throughput amplicon sequencing schema

However, NGS has its own intrinsic problems: sequenced fragments are shorter than in Sanger sequencing and sequencing errors are more frequent. Random sequencing errors can be corrected by increasing the depth/coverage, that is, reading the same sequence more times. And longer sequences can be split into shorter fragments and assembled together later by computer.

In the following link you can watch a video explaining the amplicon sequencing process using NGS:
http://www.jove.com/video/51709/next-generation-sequencing-of-16s-ribosomal-rna-gene-amplicons

 

There are 4 basic steps in an NGS amplicon sequencing analysis:

  1. Experimental design of the primer sequences to amplify the desired gene regions (markers) and of the DNA tags used to identify the samples or individuals.
  2. PCR amplification of the markers, usually one individual PCR per sample.
  3. NGS sequencing of the amplification products. The most commonly used techniques are Illumina, Ion Torrent and 454; PacBio is also increasing in importance as its error rate decreases.
  4. Bioinformatic analysis of the sequencing data. The analysis usually consists of: classification of reads into amplicons, sequencing error correction, filtering of spurious and contaminant reads, and final display of the results in a human-readable way, e.g. an Excel sheet.

Amplicon Sequencing 4 steps workflow

Metagenomics workshop in Evora

I want to announce that on 19th October I’ll teach the workshop titled “Introduction to Bioinformatics applied to Metagenomics and Community Ecology” during the conference Community Ecology for the 21st Century (Évora, Portugal).

In this workshop I’ll introduce the new tool called AmpliTAXO that allows an online, easy and automated analysis of NGS data from ribosomal RNA and other genetic markers used in metagenomics.

If you are interested, you can contact here with the conference organizers to join the workshop or the full conference, there are still available places in the workshop.

The workshop will consist of two modules: the first will cover metagenomics fundamentals, challenges, the technical advances of high-throughput sequencing techniques, and the analysis pipelines of the most used tools (UPARSE, QIIME, MOTHUR). The second part will be practical: we will analyze real metagenomic data from NGS experiments.


List of helpful Linux commands to process FASTQ files from NGS experiments

Here I’ll summarize some Linux commands that can help us work with the millions of DNA sequences produced by next-generation sequencing (NGS) experiments.

A file storing biological sequences with the extension ‘.fastq’ or ‘.fq’ is in FASTQ format; if it is also compressed with GZIP, the suffix will be ‘.fastq.gz’ or ‘.fq.gz’. A FASTQ file usually contains millions of sequences and takes up dozens of gigabytes of disk space. When these files are compressed with GZIP their size is reduced more than 10-fold (the ZIP format is less efficient).

In the next lines I’ll show you some commands to deal with compressed FASTQ files; with minor changes they can also be used with uncompressed files and FASTA format files.

To start, let’s compress a FASTQ file in GZIP format:

> gzip reads.fq

The resulting file will be named ‘reads.fq.gz’ by default.

If we want to check the contents of the file we can use the command ‘less’ or ‘zless’:

> less reads.fq.gz
> zless reads.fq.gz

And to count the number of sequences stored into the file we can count the number of lines and divide by 4:

> zcat reads.fq.gz | echo $((`wc -l`/4))
  256678360

If the file is in FASTA format, we will count the number of sequences like this:

> grep -c "^>" reads.fa


© 2024 Sixth researcher
