Sixth researcher

Another Bioinformatics website

When students outperform their teacher – Part 1

This post is a tribute to the work of my students in the subject “Basic Python Programming for Scientists”. They did a great job in their final projects, so I want to show here some examples that could be useful for many Python beginners. Remember that you can download the course slides from the Didactic Materials section.

First of all, I want to say that it was my first experience as a Python programming teacher and I was pleasantly surprised by my students. I have learned a lot from them and I can say that some of them outperformed the teacher. I also want to thank Medhat Helmy and Piotr Bentkowski for helping me with the teaching task. Unfortunately I cannot say the same about Adam Mickiewicz University, which didn’t pay me any money for teaching; welcome to Poland…

Today I’ll show you 2 projects that exceeded all my expectations: Image Quiz by Błażej Szymański and Temperature converter by Michal Stachowiak. In the coming days I’ll publish more incredible projects.

Image Quiz

Click to play Image Quiz online

Maybe the project that impressed me the most was ‘Image Quiz’ by Błażej Szymański. His initial plan was to write a Python script that automatically downloads images of different airplane models from Google, and then use these images in an HTML+JavaScript interface to play a quiz: guess the correct airplane model shown in a randomly selected image. I suggested some minor changes, like using a more biological topic, such as bird species, instead of airplanes. And now you can check the final result yourself by playing the Image Quiz online!

The project basically consists of 4 Python scripts:

  • reads a file ‘list.txt’ containing the names of the birds to look for and uses the Google API to retrieve the bird photo URLs, which are stored in JSON format in ‘data.txt’.
  • retrieves the image files using their URLs stored in ‘data.txt’ and saves them in the ‘pictures’ folder.
  • checks whether the images are higher than 800 pixels and, if so, resizes them keeping the aspect ratio.
  • saves a dictionary with bird names as keys and arrays of image locations as values, in JSON format, into ‘result.txt’. This JSON file will be used later by a JavaScript function to prepare the quiz.

import requests
import json

result = []
question = []
with open("list.txt", "r") as file:
    for line in file:
        question.append(line.strip())

for q in question:
    stuff = {"key": "WRITE HERE YOUR GOOGLE API KEY",
             "cx": "WRITE HERE YOUR SEARCH ENGINE ID",
             "q": q,
             "searchType": "image",
             "excludeTerms": "stock"}  # to get rid of stock images
    # Google Custom Search JSON API endpoint
    j = requests.get("https://www.googleapis.com/customsearch/v1", params=stuff)
    r = json.loads(j.text)
    x = []
    for each in r["items"]:
        x.append(each["link"])  # URL of each image result
    result.append(x)

dictionary = dict(zip(question, result))

with open('data.txt', 'w') as resultfile:
    json.dump(dictionary, resultfile)

import os
import requests
import shutil
import json

with open("data.txt","r") as data:
    dictionary = json.load(data)

for key in dictionary.keys():
    for index, each in enumerate(dictionary[key]):
        r = requests.get(each, stream=True)
        folder = "./pictures/{pic}".format(pic=key)
        os.makedirs(folder, exist_ok=True)
        print("Retrieving image {} from {}".format(index+1,key))

        with open(folder + "/" + str(index) + ".jpeg", "wb") as file:
            shutil.copyfileobj(r.raw, file)

print("\nPlease review downloaded pictures and delete invalid ones.")

import os
from PIL import Image

for thing in os.listdir("./pictures"):
    for model in os.listdir("./pictures/" + thing):
        if model == "Thumbs.db":  # skip Windows thumbnail files
            continue
        im = Image.open("./pictures/" + thing + "/" + model)
        if im.size[1] > 800:
            ratio = im.size[0] / im.size[1]
            x = ((round(ratio * 800)), 800)
            print(model, "resized to", x)
            im = im.resize(x, Image.ANTIALIAS)
            im.save("./pictures/" + thing + "/" + model)

import os
import json

keys = []
values = []

for thing in os.listdir("./pictures"):
    keys.append(thing)
    temp = []
    for file in os.listdir("./pictures" + "/" + thing):
        if file != "Thumbs.db":
            temp.append("./pictures/" + thing + "/" + file)
    values.append(temp)

dictionary = dict(zip(keys, values))

with open("result.txt", "w") as result:
    json.dump(dictionary, result)

Temperature converter

‘Temperature converter’ by Michal Stachowiak is a script that converts temperature values between different scales and saves the results in memory.

He implemented temperature conversions between Kelvin, Celsius and Fahrenheit scales together with a database to store and modify the results. Here I’ll show a reduced version of the script that only converts from Celsius to Kelvin.


def C_on_K(C):
    K = C + 273.15
    return K


print("       ===========================")
print("          Temperature Converter")
print("       ===========================\n\n")

print("Input and output are stored in a database\n\n")

gearbox = 0
index = 0
while gearbox < 100:  # you can execute this program 100 times
    print("Choose the converter:")
    print("----------------------")
    print(" 1. C => K")
    print("Choose the action:")
    print(" 7. Show all records")
    print(" 8. Show specific record")
    print(" 9. Edit specific record")
    print("10. Delete all records")
    print("11. Delete specific record")
    print("12. Close the program")
    print("\nChoice 1-12:")
    answer = input()
    if answer == "1":
        controler = 1
        while controler == 1:
            print("Provide the temperature in Celsius degrees:")
            C = float(input())
            if C < -273.15:  # there is no C temperature below absolute zero, so ask again
                print("There is no temperature in C degrees below -273.15")
                print("Please provide a proper C temperature")
                controler = 1
            else:
                equation5 = C_on_K(C)
                print("{:.2f}C degrees is {:.2f}K".format(C, equation5))
                controler = 0

        gearbox += 1
        input("Press any key to continue...")

    elif answer == "12":
        print("Thank You for using Temperature Converter")
        print("If You enjoy the program, I appreciate any donations")
        input("Press any key to quit...")
        break

    else:
        print("Please provide a number from 1 to 12")
        gearbox += 1
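The full script also implements the remaining scale pairs. As a sketch of how those conversions could look, following the same naming style as `C_on_K` (this is my own reconstruction, not Michal’s original code):

```python
def C_on_F(C):
    """Celsius to Fahrenheit."""
    return C * 9 / 5 + 32

def F_on_C(F):
    """Fahrenheit to Celsius."""
    return (F - 32) * 5 / 9

def K_on_C(K):
    """Kelvin to Celsius."""
    return K - 273.15

def F_on_K(F):
    """Fahrenheit to Kelvin, via Celsius."""
    return (F - 32) * 5 / 9 + 273.15

def K_on_F(K):
    """Kelvin to Fahrenheit, via Celsius."""
    return (K - 273.15) * 9 / 5 + 32

print("{:.2f}C degrees is {:.2f}F".format(100.0, C_on_F(100.0)))  # 100.00C degrees is 212.00F
```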

Python 3 for scientists course and other didactic materials at SixthResearcher

I want to share with you the new Didactic Materials section at the Sixth Researcher website.

In this section I will include courses, presentations, workshops and other materials that I have prepared and that could be useful for other researchers.

At the moment you can download 10 lessons with the fundamentals of Python 3 for biologists and other scientists that I taught at UAM.

A Metabarcoding workshop is also available, covering the fundamentals of the technique and a practical example explaining the bioinformatics analysis of the data.

The didactic materials in this new section will be licensed as Creative Commons Attribution-NonCommercial.

Python for Scientists

Materials from the course Basic Python 3 Programming for Scientists taught at Adam Mickiewicz University:

Metabarcoding Bioinformatics analysis

Materials from the workshop Introduction to Bioinformatics analysis of Metabarcoding data:


Counting blue and white bacteria colonies with Python and OpenCV

Last week I was teaching my Polish students how to use the Python packages NumPy, SciPy and Matplotlib for scientific computing. Most of the examples were about numerical calculations, data visualization and the generation of graphs and figures. If you are interested in the topic, check the following links with nice tutorials and examples for NumPy, SciPy and Matplotlib.

At the end of the lesson we also explored the capabilities of the scipy.ndimage module for image processing as shown in this nice tutorial. After all, images are pixel matrices that may be represented as NumPy arrays.

After the lesson, my curiosity led me to OpenCV (Open Source Computer Vision Library), an open-source library for computer vision that includes several hundred algorithms.

It is highly recommended to install the latest OpenCV version, although you will have to compile the code yourself as explained here. To use OpenCV in Python, just install its wrapper with the PIP installer: pip install opencv-python, and import it in any script with import cv2. This way you can use any OpenCV algorithm as if it were native Python, while in the background it is executed as C/C++ code, which makes image processing much faster.

After the technical introduction, let’s go to the interesting stuff…

Figure 1. Original blue and white bacteria colonies in Petri dish.

Let’s imagine that we are working at the lab trying to optimize a new cloning protocol. We have dozens of Petri dishes with transformed bacteria and we want to check and quantify the transformation efficiency. Each Petri dish will contain blue and white bacteria colonies; the white ones are bacteria that have incorporated our vector, disrupting the lacZ gene that generates the blue color.

We want to take photos of the Petri dishes, transfer them to the computer and use a Python script to automatically count the number of blue and white colonies.

For example, we will analyze one image (Figure 1: ‘colonies01.png’) running the following command:

> python -i colonies01.png -o colonies01.out.png
   30 blue colonies
   17 white colonies

Figure 2. Blue and white bacteria colonies are marked in red and green colors respectively as recognized by the Python script.

It will print the number of colonies of each type (blue and white) and it will create an output image (Figure 2: ‘colonies01.out.png’) where blue colonies are marked in red color and white ones in green.

Before showing the code I’ll do a few remarks:

  • The code works quite well but it is not perfect: it fails to recognize 2 small white colonies and also groups 2 other small colonies with 2 bigger ones of the same color. In the end, the script counts 30 blue and 17 white colonies.
  • One of the trickiest parts of the code is the color boundaries used to recognize blue and white spots. These thresholds were manually adjusted (with the help of Photoshop) before the analysis, and they could change under different illumination or camera conditions.
  • White colonies are more difficult to identify because their colors are grayish, and similar colors appear at the edges of the blue colonies and in the background. For that reason, the image colors are inverted prior to the white colony analysis for easier recognition.
  • It’s not AI (Artificial Intelligence). I’d rather call it ‘Human Intelligence’, because the recognition thresholds are manually trained. If you are interested in AI and image recognition, I suggest reading the recent article in Nature about skin cancer classification with deep neural networks.
  • I wanted to show a scientific application of image processing, but many other examples are available on the internet: recognizing Messi’s face in a photo, classifying Game Boy cartridges by color…

And here is the commented code that performs the magic:

# -*- coding: utf-8 -*-
"""
Bacteria counter

    Counts blue and white bacteria on a Petri dish

    python -i [imagefile] -o [imagefile]

@author: Alvaro Sebastian (sixthresearcher.com)
"""

# import the necessary packages
import numpy as np
import argparse
import imutils
import cv2
# construct the argument parse and parse the arguments
ap = argparse.ArgumentParser()
ap.add_argument("-i", "--image", required=True,
    help="path to the input image")
ap.add_argument("-o", "--output", required=True,
    help="path to the output image")
args = vars(ap.parse_args())
# dict to count colonies
counter = {}

# load the image
image_orig = cv2.imread(args["image"])
height_orig, width_orig = image_orig.shape[:2]

# output image with contours
image_contours = image_orig.copy()

colors = ['blue', 'white']
for color in colors:

    # copy of original image
    image_to_process = image_orig.copy()

    # initializes counter
    counter[color] = 0

    # define NumPy arrays of color boundaries (GBR vectors)
    if color == 'blue':
        lower = np.array([ 60, 100,  20])
        upper = np.array([170, 180, 150])
    elif color == 'white':
        # invert image colors
        image_to_process = (255-image_to_process)
        lower = np.array([ 50,  50,  40])
        upper = np.array([100, 120,  80])

    # find the colors within the specified boundaries
    image_mask = cv2.inRange(image_to_process, lower, upper)
    # apply the mask
    image_res = cv2.bitwise_and(image_to_process, image_to_process, mask = image_mask)

    ## load the image, convert it to grayscale, and blur it slightly
    image_gray = cv2.cvtColor(image_res, cv2.COLOR_BGR2GRAY)
    image_gray = cv2.GaussianBlur(image_gray, (5, 5), 0)

    # perform edge detection, then perform a dilation + erosion to close gaps in between object edges
    image_edged = cv2.Canny(image_gray, 50, 100)
    image_edged = cv2.dilate(image_edged, None, iterations=1)
    image_edged = cv2.erode(image_edged, None, iterations=1)

    # find contours in the edge map
    cnts = cv2.findContours(image_edged.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    cnts = cnts[0] if imutils.is_cv2() else cnts[1]

    # loop over the contours individually
    for c in cnts:
        # if the contour is not sufficiently large, ignore it
        if cv2.contourArea(c) < 5:
            continue

        # compute the Convex Hull of the contour
        hull = cv2.convexHull(c)
        if color == 'blue':
            # print contours in red color
            cv2.drawContours(image_contours, [hull], 0, (0, 0, 255), 1)
        elif color == 'white':
            # print contours in green color
            cv2.drawContours(image_contours, [hull], 0, (0, 255, 0), 1)

        counter[color] += 1
        #cv2.putText(image_contours, "{:.0f}".format(cv2.contourArea(c)), (int(hull[0][0][0]), int(hull[0][0][1])), cv2.FONT_HERSHEY_SIMPLEX, 0.65, (255, 255, 255), 2)

    # Print the number of colonies of each color
    print("{} {} colonies".format(counter[color],color))

# Writes the output image
cv2.imwrite(args["output"], image_contours)

Amplicon sequencing and high-throughput genotyping – HLA typing

In the previous post I explained the fundamentals of the Amplicon Sequencing (AS) technique; today I will show some current and future applications in HLA typing.

Another field where AS is used is the genotyping of complex gene families, for example the major histocompatibility complex (MHC). This gene family is known to be highly polymorphic (high allele variation) and to contain multiple copies of an ancestral gene (paralogues). MHC genes of class I and II encode the cellular receptors that present antigens to immune cells. The MHC in humans is also called the human leukocyte antigen (HLA). HLA typing plays a key role in compatibility for any tissue transplantation, has been associated with more than 100 different diseases (primarily autoimmune diseases), and has recently been associated with positive and negative responses to various drugs. HLA loci are so polymorphic that no 2 individuals in a non-endogamic population share the same set of alleles (except twins).

Number of HLA alleles known to date. Source: IMGT-HLA database

As in personalized medicine and metagenomics/metabarcoding, there are 2 approaches to NGS HLA typing: the first is to use whole genome, exome or transcriptome data, and the second is to amplify specific HLA loci by amplicon sequencing. The second approach is suitable for typing hundreds or thousands of individuals, but requires tested primers for multiplex PCR of the HLA regions.

Basically, the HLA-typing analysis workflow after sequencing the PCR products consists of:

  1. Mapping/aligning the reads against the HLA allele reference sequences from the IMGT-HLA public database.
  2. Retrieving the genotypes from the references with the longest and best-scoring mappings.
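The two steps above can be sketched with a toy example. Assuming we already have per-read alignment scores against reference alleles (the read IDs, allele names and scores below are made up; real pipelines work on full alignments against the IMGT-HLA database), a naive diploid call could keep the best-scoring allele for each read and report the two most supported alleles:

```python
from collections import Counter

def call_genotype(alignments, n_alleles=2):
    """alignments: (read_id, allele, score) tuples.
    Assign each read to its best-scoring allele, then report the
    n_alleles alleles supported by the most reads (a toy diploid call)."""
    best = {}
    for read_id, allele, score in alignments:
        if read_id not in best or score > best[read_id][1]:
            best[read_id] = (allele, score)
    support = Counter(allele for allele, _ in best.values())
    return [allele for allele, _ in support.most_common(n_alleles)]

# hypothetical alignment scores of 4 reads against two HLA-A alleles
alignments = [("r1", "A*01:01", 60), ("r1", "A*02:01", 40),
              ("r2", "A*02:01", 58), ("r3", "A*01:01", 55),
              ("r4", "A*02:01", 57)]
print(call_genotype(alignments))  # the two most supported alleles
```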

Inoue et al. wrote a complete review of the topic in ‘The impact of next-generation sequencing technologies on HLA research‘.

HLA-typing workflow. Modified from Inoue et al.

Nowadays there are commercial kits that allow reliable, fast and economical HLA typing: Illumina TruSight HLA v2, Omixon Holotype HLA, GenDx NGSgo or One Lambda NXType NGS.

Amplicon sequencing and high-throughput genotyping – Metagenomics

In the previous post I explained the fundamentals of the Amplicon Sequencing (AS) technique; today I will show some current and future applications in Metagenomics and Metabarcoding.

Metagenomics (also referred to as ‘environmental’ or ‘community’ genomics) is the study of genetic material recovered directly from environmental samples. This discipline applies a suite of genomic technologies and bioinformatics tools to directly access the genetic content of entire communities of organisms. Usually we use the term metabarcoding when we apply the amplicon sequencing approach to metagenomics studies, while the term metagenomics is preferred when we study full genomes, not only a few gene regions.

Metabarcoding workflow. Source:

For metabarcoding, the 16S rRNA gene is the most common universal DNA barcode (marker) used to identify species from across the Tree of Life with great accuracy, but other genes are also used, such as cytochrome c oxidase subunit 1 (CO1), rRNA genes (16S/18S/28S), plant-specific markers (rbcL, matK and trnH-psbA) and gene regions such as the internal transcribed spacers (ITSs) (Kress et al. 2014; Joly et al. 2014). These genes have mutation rates fast enough to discriminate between close species, while being stable enough to identify individuals of the same species.

Prokaryotic and eukaryotic rRNA operons

A perfect metagenomics barcode/marker should…

  • be present in all the organisms, in all the cells
  • have variable sequence among different species
  • be conserved among individuals of the same species
  • be easy to amplify and not too long for sequencing

Recommended DNA barcodes for metagenomics

The pioneering metabarcoding study of Sogin et al. 2006, deciphering the microbial diversity of the deep sea, used the V6 hypervariable region of the rRNA gene as barcode. Sogin et al. sequenced around 118,000 PCR amplicons from environmental DNA preparations and unveiled thousands of previously unknown species.

Note that in metabarcoding DNA tags are not used to pick out single individuals, but to identify different samples (of water, soil, air…).

Amplicon sequencing and high-throughput genotyping – Basics

The Amplicon Sequencing (AS) technique consists of sequencing the products of multiple PCRs (amplicons), where a single amplicon is the set of DNA sequences obtained in each individual PCR.

Before the arrival of high-throughput sequencing technologies, PCR products were Sanger sequenced individually. But Sanger sequencing is only able to resolve one DNA sequence (allele) per sample, or at most a mix of 2 sequences differing in only one nucleotide (usually heterozygous alleles):

Sanger sequencing limitations

The solution to this limitation was bacterial cloning, followed by isolation, amplification and sequencing of individual clones. Nevertheless, bacterial cloning is a time-consuming approach that is only feasible for a few dozen sequences.

Bacterial cloning to Sanger sequence clones individually


Fortunately, high-throughput sequencing (HTS) techniques, also called next-generation sequencing (NGS), are able to sequence millions of fragments with individual resolution. And the combination of amplicon sequencing with NGS allows us to genotype hundreds or thousands of samples in a single experiment.

The only requirement for such an experiment is to include different DNA tags to identify the individuals/samples in the experiment. A DNA tag is a short, unique sequence of nucleotides (e.g. ACGGTA) that is attached to the 5′ end of a PCR primer or ligated after the individual PCR. Each tag should be unique to each sample/individual so that the sequences or reads from each individual PCR (amplicon) can be classified (Binladen et al. 2007; Meyer et al. 2007).
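A toy demultiplexing step illustrates the idea (the tag sequences and reads below are hypothetical, and real tools also tolerate sequencing errors within the tag):

```python
def demultiplex(reads, tags):
    """Assign each read to a sample by its 5' DNA tag, stripping the tag."""
    samples = {sample: [] for sample in tags}
    unassigned = []
    for read in reads:
        for sample, tag in tags.items():
            if read.startswith(tag):
                samples[sample].append(read[len(tag):])  # remove the tag
                break
        else:
            unassigned.append(read)  # no known tag at the 5' end
    return samples, unassigned

tags = {"sample1": "ACGGTA", "sample2": "TGCACT"}  # hypothetical tag design
reads = ["ACGGTATTTGGG", "TGCACTAAACCC", "GGGGGGGGGGGG"]
samples, unassigned = demultiplex(reads, tags)
print(samples)  # reads classified per sample, with tags removed
```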

High-throughput amplicon sequencing schema

Again, NGS has its own intrinsic problems: sequenced fragments are shorter than in Sanger sequencing and sequencing errors are more frequent. Random sequencing errors can be corrected by increasing the depth/coverage, i.e. reading the same sequence more times. And longer sequences can be split into shorter fragments and assembled together later by computer.
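For example, correcting a random error by majority vote across reads of the same amplicon can be sketched like this (a toy per-position consensus over same-length reads; real error-correction tools are far more sophisticated):

```python
from collections import Counter

def consensus(reads):
    """Majority vote per position across same-length reads of one amplicon."""
    return "".join(Counter(column).most_common(1)[0][0] for column in zip(*reads))

# the sequencing error 'A' at position 4 of the middle read is outvoted
reads = ["ACGTT", "ACGAT", "ACGTT"]
print(consensus(reads))  # ACGTT
```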

In the following link you can watch a video explaining the amplicon sequencing process using NGS:


Basically, there are 4 steps in an NGS Amplicon Sequencing analysis:

  1. Experimental design of the primer sequences to amplify the desired gene regions (markers) and of the DNA tags used to identify samples or individuals.
  2. PCR amplification of the markers, usually one individual PCR per sample.
  3. NGS sequencing of the amplification products. The most commonly used technologies are Illumina, Ion Torrent and 454; PacBio is also increasing in importance as its error rate decreases.
  4. Bioinformatic analysis of the sequencing data. The analysis should consist of: classification of reads into amplicons, sequencing error correction, filtering of spurious and contaminant reads, and final display of the results in a human-readable way, e.g. an Excel sheet.

Amplicon Sequencing 4 steps workflow

Python 3 reference cheat sheet for beginners

Today I want to share a Python 3 reference/cheat sheet that I prepared for my students. I tried to collect on two DIN-A4 sides the most popular Python 3 data types, operators, methods, functions and other useful stuff that people learning Python need in their first months and even later.

You can download the updated version of the reference sheet in PDF format from the Didactic Materials section.

Python 3 reference cheat sheet for beginners

When Blast is not Blast

BLAST changed some time ago to the better and faster BLAST+ version, but along the way small differences were introduced that may confuse more than one user.

Old BLAST was run with the command ‘blastall‘. A protein alignment was called as ‘blastall -p blastp‘, and the equivalent for nucleotides was ‘blastall -p blastn‘; if we wanted to use MEGABLAST, there was a separate command, ‘megablast‘.

Nevertheless, in the ‘new’ BLAST+ version, the alignments of proteins and nucleotides were split into two different commands: ‘blastp‘ and ‘blastn‘ (see the manual).

Up to here everything looks natural and logical, but not everyone knows that THE DEFAULT OPTION IN THE NEW BLASTN COMMAND IS MEGABLAST.

If we run ‘blastn -help‘ we will obtain the following explanation:

 *** General search options
 -task <String, Permissible values: 'blastn' 'blastn-short' 'dc-megablast'
                'megablast' 'rmblastn' >
   Task to execute
   Default = `megablast'

But not everyone is interested in increasing the alignment speed; many of us still use Blastn precisely because we appreciate its great sensitivity for detecting short local alignments. Nowadays Blast can align thousands of sequences in a reasonable time of minutes or even seconds. To increase the speed of alignments involving millions of sequences, there are better alternatives such as Bowtie2.

As a CONCLUSION: if we use Blastn and are interested in high sensitivity, we should run it as:

blastn -task blastn

If not, we will be executing MEGABLAST, losing a great deal of sensitivity and risking not finding the expected alignments.

As an example, let’s search for homology between the human beta-2 microglobulin (NM_004048.2) and the murine one (NM_009735.3) using the online version of Blastn with the default options (‘Highly similar sequences – megablast’):

But if we change the search option to ‘Somewhat similar sequences (blastn)’:

It is a big difference: megablast does not find any significant similarity, whereas blastn outputs a high-similarity alignment with an E-value of 1.5E-56!

You can read my original post in Spanish at ‘Bioinfoperl’ blog.

Metagenomics workshop in Evora

I want to announce that on 19th October I’ll teach the workshop titled “Introduction to Bioinformatics applied to Metagenomics and Community Ecology” during the conference Community Ecology for the 21st Century (Évora, Portugal).

In this workshop I’ll introduce a new tool called AmpliTAXO that allows an online, easy and automated analysis of NGS data from ribosomal RNA and other genetic markers used in metagenomics.

If you are interested, you can contact the conference organizers here to join the workshop or the full conference; there are still places available in the workshop.

The workshop will consist of two modules. In the first, I will present the fundamentals of metagenomics, its challenges, the technical advances of high-throughput sequencing techniques, and the analysis pipelines of the most used tools (UPARSE, QIIME, MOTHUR). The second part will be practical: we will analyze real metagenomic data from NGS experiments.

Continue reading

List of helpful Linux commands to process FASTQ files from NGS experiments

Here I’ll summarize some Linux commands that can help us work with the millions of DNA sequences produced by Next-Generation Sequencing (NGS).

A file storing biological sequences with the extension ‘.fastq’ or ‘.fq’ is in FASTQ format; if it is also compressed with GZIP, the suffix will be ‘.fastq.gz’ or ‘.fq.gz’. A FASTQ file usually contains millions of sequences and takes up dozens of gigabytes on disk. When these files are compressed with GZIP, their size is reduced by more than 10 times (ZIP format is less efficient).
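Just to illustrate the four-lines-per-record structure of FASTQ, a minimal Python reader for gzipped files could look like this (a sketch for illustration; for real work use a tested parser such as Biopython’s SeqIO):

```python
import gzip

def read_fastq(path):
    """Yield (header, sequence, quality) tuples from a gzipped FASTQ file."""
    with gzip.open(path, "rt") as fh:
        while True:
            header = fh.readline().rstrip()
            if not header:
                break
            seq = fh.readline().rstrip()
            fh.readline()  # the '+' separator line
            qual = fh.readline().rstrip()
            yield header, seq, qual

# write a tiny example file and read it back
with gzip.open("example.fq.gz", "wt") as fh:
    fh.write("@read1\nACGT\n+\nIIII\n@read2\nTTGG\n+\nIIII\n")

records = list(read_fastq("example.fq.gz"))
print(len(records), "sequences")  # 2 sequences
```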

In the next lines I’ll show you some commands to deal with compressed FASTQ files; with minor changes they can also be used with uncompressed files and with FASTA format files.

To start, let’s compress a FASTQ file in GZIP format:

> gzip reads.fq

The resulting file will be named ‘reads.fq.gz’ by default.

If we want to check the contents of the file we can use the command ‘less’ or ‘zless’:

> less reads.fq.gz
> zless reads.fq.gz

And to count the number of sequences stored in the file, we can count the number of lines and divide by 4:

> zcat reads.fq.gz | echo $((`wc -l`/4))

If the file is in FASTA format, we will count the number of sequences like this:

> grep -c "^>" reads.fa

Continue reading


© 2017 Sixth researcher

Theme by Anders Norén