
Rec-I-DCM3: A Fast Algorithmic Technique for Reconstructing Large Evolutionary Trees

Usman Roshan
Department of Computer Science
New Jersey Institute of Technology

Phylogeny

[Figure: phylogenetic tree relating Orangutan, Gorilla, Chimpanzee, and Human. From the Tree of Life Website, University of Arizona.]

DNA Sequence Evolution

[Figure: an ancestral sequence AAGACTT at -3 million years evolves along a tree, through intermediates AAGGCCT, TGGACTT, AGGGCAT, TAGCCCT, and AGCACTT at -2 and -1 million years, into the present-day sequences AGGGCAT, TAGCCCA, TAGACTT, AGCACAA, and AGCGCTT.]

Phylogeny Problem

[Figure: input sequences TAGCCCA, TAGACTT, TGCACAA, TGCGCTT, and AGGGCAT for taxa U, V, W, X, Y, and the output tree leaf-labeled by U, V, W, X, Y.]

Why construct phylogenies?

Evolutionary history relates all organisms and genes, and helps us understand:
• interactions between genes (genetic networks)
• functions of genes
• influenza vaccine development
• origins and spread of disease
• origins and migrations of humans
• drug design

Phylogenetic reconstruction methods

1. Hill-climbing heuristics for hard optimization criteria: Maximum Parsimony and Maximum Likelihood (see the sketch after this list)

[Figure: search landscape over phylogenetic trees (x-axis: trees, y-axis: cost) showing a global optimum and a local optimum.]

2. Polynomial-time distance-based methods: Neighbor Joining, etc.

3. Bayesian methods
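To make the hill-climbing idea concrete, here is a minimal, hypothetical sketch (in Python) of the kind of search loop these heuristics use; score and neighbors are placeholder functions standing in for a parsimony or likelihood scorer and a tree-rearrangement move (e.g., NNI), not the actual routines of any particular package.

```python
def hill_climb(start_tree, score, neighbors, max_steps=10000):
    """Generic hill-climbing over tree space.

    score(tree)     -> cost to minimize (e.g., parsimony score); assumed given
    neighbors(tree) -> iterable of trees one rearrangement away (e.g., NNI)
    Stops at a local optimum: no neighbor improves the current score.
    """
    current, current_score = start_tree, score(start_tree)
    for _ in range(max_steps):
        best, best_score = None, current_score
        for candidate in neighbors(current):
            s = score(candidate)
            if s < best_score:
                best, best_score = candidate, s
        if best is None:          # local optimum reached
            return current, current_score
        current, current_score = best, best_score
    return current, current_score
```

Because the loop stops as soon as no neighbor improves the score, it halts at whichever local optimum the starting tree leads to, which is exactly the failure mode illustrated in the figure above.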

Maximum Parsimony (a.k.a. the Steiner Tree problem in phylogenetics)

• Input: set S of n aligned sequences of length k
• Output: a phylogenetic tree T
  – leaf-labeled by the sequences in S
  – with additional sequences of length k labeling the internal nodes of T

such that $\sum_{(i,j) \in E(T)} H(i,j)$ is minimized, where H(i,j) is the Hamming distance between the sequences labeling the endpoints of edge (i,j).
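To illustrate the objective, here is a minimal sketch of Fitch's algorithm, which scores one fixed binary tree under parsimony (the hard part of MP is searching over trees, not scoring a single one). The nested-tuple tree representation and the toy data are assumptions made for this example only.

```python
def fitch_score(tree, sequences):
    """Parsimony score of a fixed binary tree via Fitch's algorithm.

    tree: nested tuples, e.g. (("A", "B"), ("C", "D")); strings are leaf names
    sequences: dict mapping leaf name -> aligned sequence (all equal length)
    Returns the minimum number of state changes summed over all sites.
    """
    changes = 0

    def site_sets(node, pos):
        nonlocal changes
        if isinstance(node, str):               # leaf: its state at this site
            return {sequences[node][pos]}
        left, right = node
        a, b = site_sets(left, pos), site_sets(right, pos)
        if a & b:                               # agreement: keep the intersection
            return a & b
        changes += 1                            # disagreement costs one change
        return a | b

    seq_len = len(next(iter(sequences.values())))
    for pos in range(seq_len):
        site_sets(tree, pos)
    return changes

# Example (hypothetical data): score of one candidate tree
seqs = {"A": "AAG", "B": "AAG", "C": "AGG", "D": "AGG"}
print(fitch_score((("A", "B"), ("C", "D")), seqs))   # -> 1
```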

Very large tree space for MP and ML

• The number of (unrooted) binary trees on n leaves is (2n-5)!!

• If each tree on 1000 taxa could be analyzed in 0.001 seconds, we would find the best tree in 2890 millennia.

#leaves    #trees
4          3
5          15
6          105
7          945
8          10395
9          135135
10         2027025
20         2.2 x 10^20
100        4.5 x 10^190
1000       2.7 x 10^2900
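The table entries follow directly from the (2n-5)!! formula; a few lines of Python reproduce them:

```python
def num_unrooted_binary_trees(n):
    """Number of unrooted binary trees on n >= 3 leaves: (2n-5)!! = 3*5*...*(2n-5)."""
    count = 1
    for k in range(3, 2 * n - 4, 2):   # odd factors 3, 5, ..., 2n-5
        count *= k
    return count

for n in [4, 5, 6, 7, 8, 9, 10, 20]:
    print(n, num_unrooted_binary_trees(n))   # 4 -> 3, 10 -> 2027025, 20 -> ~2.2e20
```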

Problems with heuristics for ML and MP

• Many software packages implement heuristics for finding MP and ML trees: PAUP*, PHYLIP, MrBayes, TNT, …

• Heuristics for Maximum Parsimony (MP) and Maximum Likelihood (ML) cannot handle large datasets: they get trapped in local optima

Problems with current heuristics

• The current best technique for MP is the TNT software package, available from Pablo Goloboff.
• TNT remains well above the best known score even after 168 hours (7 days) of computation.
• A separate study of ours shows that trees more than 0.01% above the "optimal" score can differ significantly in structure, whereas trees within that 0.01% threshold are topologically similar.

Our approach

Use Disk-Covering Methods (DCMs) to boost the performance of the existing best known technique.

The Warnow et al. DCM2 technique for speeding up MP/ML searches

1. Decompose sequences into overlapping subproblems

2. Compute subtrees using a base method

3. Merge subtrees using the Strict Consensus Merge (SCM)

4. Refine to make the tree binary

Problems with DCM1 and DCM2

• DCM1 was designed to improve the statistical performance of distance-based methods such as NJ. It does not help with MP or ML analyses – too many subproblems and too much loss of resolution after merging.

• DCM2 helps with MP and ML analyses, but only on some datasets – the decomposition does not reduce the problem size enough and takes too long to compute.

DCM2 decomposition vs DCM3 decomposition (our new DCM)

DCM2
• Input: distance matrix d, threshold q ∈ {d_ij}, sequences S
• Algorithm:
  1a. Compute a threshold graph G using q and d
  1b. Perform a minimum-weight triangulation of G

DCM3
• Input: guide tree T on S, sequences S
• Algorithm:
  1. Compute a short quartet graph G using T. The graph G is provably triangulated.

Both decompositions then:
  2. Find a separator X in G which minimizes max_i |X ∪ A_i|, where the A_i are the connected components of G – X
  3. Output the subproblems X ∪ A_i.

DCM3 advantage: it is faster and produces smaller subproblems than DCM2.

DCM2 vs DCM3 (threshold graph vs short quartet graph)

DCM2 threshold graph
• V = {sequences}
• E = {(i,j): d_ij <= q}
• q is at least the minimum value required to make the graph connected (a small construction sketch follows below)
• G can become very dense, especially given outliers, and thus produce large separators (on the order of 80%) and subproblems up to 90% of the original size!

DCM3 short quartet graph
• V = {sequences}
• E = {(i,j): sequences i and j appear together in some short quartet}
• G is not as dense, and any outlier appears in only one short quartet.
• Separators are small and subproblems are at most 50% of the original size in practice.
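As a rough sketch of the DCM2 threshold-graph construction described above (step 1a only; the triangulation and separator steps are not shown), the code below tries candidate thresholds q drawn from the entries of the distance matrix in increasing order and keeps the smallest one that makes the graph connected. The plain list-of-lists distance matrix and the brute-force connectivity check are simplifications for illustration, not the implementation used in the DCM2 software.

```python
from collections import deque

def threshold_graph(d):
    """DCM2-style threshold graph for a symmetric distance matrix d (list of lists).

    Tries candidate thresholds q from the set of pairwise distances in
    increasing order and returns (q, edges) for the smallest q that makes the
    graph on vertices 0..n-1 connected.
    """
    n = len(d)
    candidates = sorted({d[i][j] for i in range(n) for j in range(i + 1, n)})
    for q in candidates:
        edges = {(i, j) for i in range(n) for j in range(i + 1, n) if d[i][j] <= q}
        if _connected(n, edges):
            return q, edges
    return None, set()

def _connected(n, edges):
    """BFS connectivity check on an undirected graph given as a set of edges."""
    adj = {v: [] for v in range(n)}
    for i, j in edges:
        adj[i].append(j)
        adj[j].append(i)
    seen, queue = {0}, deque([0])
    while queue:
        v = queue.popleft()
        for w in adj[v]:
            if w not in seen:
                seen.add(w)
                queue.append(w)
    return len(seen) == n
```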

Time to compute DCM3 decompositions

• An optimal DCM3 decomposition takes O(n^3) time to compute – same as for DCM2

• The centroid-edge DCM3 decomposition can be computed in O(n^2) time

• An approximate centroid-edge decomposition can be computed in O(n) time

DCM3 decomposition – example

1. Locate the centroid edge e in the guide tree (O(n) time; see the sketch below)
2. Set the closest leaves around e to be the separator (O(n) time)
3. The remaining leaves in the subtrees around e form the subsets (each unioned with the separator)
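Below is a rough sketch of step 1, finding a centroid edge, i.e., the edge whose removal splits the leaves most evenly. The adjacency-list representation of the guide tree is an assumption for this example; a single post-order pass collects the leaf count on one side of every edge, which is what makes the linear-time bound plausible.

```python
def centroid_edge(adj):
    """Find a centroid edge of an unrooted tree.

    adj: dict mapping node -> list of neighboring nodes (the guide tree).
    Returns the edge (u, v) whose removal minimizes the larger of the two
    leaf counts on either side, using one post-order traversal.
    """
    total_leaves = sum(1 for v in adj if len(adj[v]) == 1)
    root = next(iter(adj))

    # Iterative DFS: record parents and a pre-order; post-order = reversed pre-order.
    parent, order, stack = {root: None}, [], [root]
    while stack:
        v = stack.pop()
        order.append(v)
        for w in adj[v]:
            if w != parent[v]:
                parent[w] = v
                stack.append(w)

    # Count leaves in the subtree below each node (children before parents).
    leaf_count = {}
    for v in reversed(order):
        below = sum(leaf_count[w] for w in adj[v] if parent.get(w) == v)
        leaf_count[v] = below if below else 1      # a node with no children is a leaf

    best_edge, best_max_side = None, total_leaves + 1
    for v in adj:
        if parent[v] is None:
            continue
        max_side = max(leaf_count[v], total_leaves - leaf_count[v])
        if max_side < best_max_side:
            best_edge, best_max_side = (parent[v], v), max_side
    return best_edge

# Example on a 4-leaf unrooted tree with internal nodes x and y
tree = {"A": ["x"], "B": ["x"], "C": ["y"], "D": ["y"],
        "x": ["A", "B", "y"], "y": ["C", "D", "x"]}
print(centroid_edge(tree))   # -> ('x', 'y'), splitting the leaves 2 vs 2
```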

Improving upon the DCM3 tree: Iterative-DCM3

[Figure: a starting tree T is passed through a DCM3 decomposition plus local search to produce an improved tree T', which becomes the starting tree for the next round.]

Recursive-Iterative-DCM3

[Figure: the same loop, with Recursive-DCM3 plus local search taking the starting tree T to T'.]
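A high-level skeleton of the Rec-I-DCM3 loop as these slides describe it is sketched below. All of the function arguments (decompose_recursively, base_method, strict_consensus_merge, refine, local_search) are placeholders for the real components (the recursive DCM3 decomposition, a base MP heuristic, the SCM, the refinement step, and a TNT-style search), not implementations of them.

```python
def rec_i_dcm3(start_tree, sequences, max_subset_size, iterations,
               decompose_recursively, base_method, strict_consensus_merge,
               refine, local_search):
    """Skeleton of the Rec-I-DCM3 iteration (components passed in as callables).

    Each iteration: recursively decompose the current guide tree into
    subproblems of at most max_subset_size sequences, solve each subproblem
    with the base method, merge the subtrees with the strict consensus merge,
    refine the merged tree to a binary tree, run a round of global local
    search, and feed the result back in as the next guide tree.
    """
    current = start_tree
    for _ in range(iterations):
        subproblems = decompose_recursively(current, sequences, max_subset_size)
        subtrees = [base_method(sub) for sub in subproblems]
        merged = refine(strict_consensus_merge(subtrees))
        current = local_search(merged, sequences)
    return current
```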

Experimental design

• Datasets: 10 real datasets ranging from 1127 to 13921 sequences (DNA and rRNA)

• Methods studied:
  – Recursive-Iterative-DCM3 (Rec-I-DCM3) with 1/4 and 1/8 subset sizes
  – TNT (combination of simulated annealing, divide-and-conquer, and genetic algorithms)

• Five runs of each method:
  – Rec-I-DCM3: each run for 168 hours (1 week)
  – TNT: each run for 336 hours (2 weeks)

Results

1. Performance as a function of time on dataset of 13921 sequences

2. Comparison of scores found at 168 hours

3. Rec-I-DCM3 speedup over TNT

5 to 60 minutes on 13921 sequences

All three methods are well above the best known score in the first hour.

1 to 24 hours on 13921 sequences

Rec-I-DCM3 scores improve faster than TNT

1 to 336 hours on 13921 sequences

• Rapid improvement for both methods in the first 24 hours.
• Rec-I-DCM3 continues to improve faster than TNT thereafter.

1 to 336 hours on 13921 sequences, with all five runs plotted

• Plots of all five runs of each method show statistically sound results.
• Similar behavior on all datasets.

Average percentage above the best known score on each dataset

[Bar charts: average percentage above the best known score for each dataset at 24 hours and at 168 hours.]

From 24 to 168 hours, Rec-I-DCM3's percentage above the best known score roughly halves, whereas TNT improves only slowly.

Speedup: (average Rec-I-DCM3 time) / (average TNT time) to reach the average TNT 2-week score

On average, Rec-I-DCM3 reaches the average TNT 2-week score about 25 times faster than TNT on datasets 9 and 10, and about 50 times faster on datasets 6 and 8.

Software

• Open source DCM3 code available from http://www.cs.njit.edu/usman

• See CIPRES (http://www.phylo.org)

• Contact Usman Roshan ([email protected]) or Tandy Warnow ([email protected]) for more information

Future work

• DCM3–ML

• Biological discoveries from large dataset analysis

• Optimal subset size

Acknowledgements

This work was done in collaboration with
• Tandy Warnow (UT Austin and Radcliffe)
• Bernard Moret (UNM)
• Tiffani Williams (UNM)

Thanks to
• Pablo Goloboff for support on TNT
• Dave Swofford (FSU) for support on PAUP*
• Robin Gutell (UT Austin) for providing large accurate alignments
• Doug Burger (UT Austin) and Steve Keckler (UT Austin) for usage of the SCOUT Pentium and Mastadon Xeon clusters