

Produced by Cambridge Healthtech Media Group

By John Russell, Contributing Editor, Bio•IT World

Exploring EMC Isilon scale-out storage solutions

Hadoop’s Rise in Life Sciences


By now the ‘Big Data’ challenge is familiar to the entire life sciences community. Modern high-throughput experimental technologies generate vast data sets that can only be tackled with high-performance computing (HPC). Genomics, of course, is the leading example. At the end of 2011, global annual sequencing capacity was estimated at 13 quadrillion bases and growing rapidly1. It’s worth noting that a single base pair typically represents about 100 bytes of data (raw, analyzed, and interpreted). The need to manage and analyze these massive data sets, not just in life sciences but throughout science and industry, has spurred many new approaches to HPC infrastructure and led to important IT advances, particularly in distributed computing. While there is no single right answer, one approach – the Hadoop storage and compute framework – is emerging as a compelling contender for coping with the deluge of data in the life sciences.
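A rough back-of-envelope calculation, using only the approximate figures quoted above, shows why single-server storage and analysis quickly fall short:

```python
# Back-of-envelope estimate of global sequence-data volume,
# using the approximate figures cited in the text (illustrative only).
bases_per_year = 13e15      # ~13 quadrillion bases sequenced per year (2011 estimate)
bytes_per_base = 100        # ~100 bytes per base pair (raw + analyzed + interpreted)

total_bytes = bases_per_year * bytes_per_base
print(f"Implied annual data volume: {total_bytes / 1e18:.1f} exabytes")  # ~1.3 EB
```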

Created in 2004 by Doug Cutting (who famously named it after his son’s stuffed elephant) and elevated to a top-level Apache Software Foundation project in 2008, Hadoop is intended to run large-scale distributed data analysis on commodity clusters. Cutting was initially inspired by a paper2 from Google Labs describing Google’s MapReduce programming model and the distributed file system infrastructure beneath it. (For a detailed perspective see Ronald Taylor’s An overview of the Hadoop/MapReduce/HBase framework and its current applications in bioinformatics.3)

Broadly, Hadoop uses a file system (the Hadoop Distributed File System, or HDFS) and framework software (MapReduce) to break extremely large data sets into chunks, to distribute/store (Map) those chunks across the nodes of a cluster, and to gather (Reduce) results following computation. Hadoop’s distinguishing feature is that it automatically stores the chunks of data on the same nodes on which they will be processed. This strategy of co-locating data and processing power (proximity computing) significantly accelerates performance; in April 2008 a Hadoop program running on a 910-node cluster broke a world record, sorting a terabyte of data in less than 3.5 minutes.4
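The flow described above can be sketched in a few lines of Python. This is only a toy, in-memory illustration of the Map, shuffle/sort and Reduce phases (here counting short DNA “words” across chunks of sequence), not Hadoop itself, which adds distribution, replication and fault tolerance on top of the same pattern:

```python
from collections import defaultdict

# Toy illustration of the MapReduce pattern: count 3-mers across "chunks"
# of sequence. Hadoop distributes these same phases across a cluster.
chunks = ["ACGTACGT", "TTACGTAA", "ACGTTTGG"]   # stand-ins for HDFS blocks

def map_phase(chunk):
    """Map: emit (k-mer, 1) key:value pairs for one chunk."""
    for i in range(len(chunk) - 2):
        yield chunk[i:i + 3], 1

# Shuffle/sort: group all values by key (Hadoop does this between phases).
grouped = defaultdict(list)
for chunk in chunks:
    for kmer, count in map_phase(chunk):
        grouped[kmer].append(count)

def reduce_phase(kmer, counts):
    """Reduce: sum the counts for one key."""
    return kmer, sum(counts)

results = dict(reduce_phase(k, v) for k, v in sorted(grouped.items()))
print(results)   # {'ACG': 4, 'CGT': 4, 'GTA': 2, ...}
```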


1 “DNA Sequencing Caught in Deluge of Data”, New York Times, Nov. 30, 2011, http://www.nytimes.com/2011/12/01/business/dna-sequencing-caught-in-deluge-of-data.html?_r=1&ref=science

2 Dean J, Ghemawat S, “MapReduce: Simplified Data Processing on Large Clusters”, OSDI’04: Sixth Symposium on Operating System Design and Implementation, San Francisco, CA, December 2004, http://research.google.com/archive/mapreduce.html

3 Taylor RC, “An overview of the Hadoop/MapReduce/HBase framework and its current applications in bioinformatics”, BMC Bioinformatics, 11(Suppl 12):S1, 2010, http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3040523/

4 “Hadoop Wins Terabyte Sort Benchmark”, Apr. 2008, http://sortbenchmark.org/YahooHadoop.pdf (last accessed Dec. 2011)

The Hadoop Distributed File System (HDFS) and compute framework (MapReduce) enable Hadoop to break extremely large data sets into chunks, to distribute/store (Map) those chunks to nodes in a cluster, and to gather (Reduce) results following computation.


Part of the improved performance stems from MapReduce’s key:value programming model, which speeds up and scales out parallelized “job” execution better than many alternatives, such as the GridEngine architecture commonly used in HPC. (One of the earliest use cases for the Sun GridEngine was the DNA sequence comparison BLAST5 search.) The MapReduce layer is a batch query processor with a dynamic data schema and linear scaling for unstructured or semi-structured data. Because the data is not “normalized” (decomposed into smaller, structured relationships), higher-level interpreted languages such as Ruby and Python, as well as a compiled language like C++, provide straightforward ways to express programs as MapReduce “jobs”.
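In practice, the simplest way for an interpreted language to express a MapReduce job is Hadoop Streaming, where the mapper and reducer are ordinary programs that read and write tab-separated key:value lines on standard input and output. A minimal sketch follows; the word-count task and file name are illustrative, not taken from the paper:

```python
#!/usr/bin/env python
# streaming_wordcount.py -- illustrative Hadoop Streaming mapper/reducer.
# Run as:  streaming_wordcount.py map     (mapper)
#          streaming_wordcount.py reduce  (reducer)
# Hadoop Streaming pipes text through these processes as key<TAB>value lines.
import sys

def mapper():
    for line in sys.stdin:
        for word in line.split():
            print(f"{word}\t1")                   # emit key:value pairs

def reducer():
    current, total = None, 0
    for line in sys.stdin:                        # input arrives sorted by key
        key, value = line.rstrip("\n").split("\t")
        if key != current:
            if current is not None:
                print(f"{current}\t{total}")
            current, total = key, 0
        total += int(value)
    if current is not None:
        print(f"{current}\t{total}")

if __name__ == "__main__":
    mapper() if sys.argv[1:] == ["map"] else reducer()
```

Such a script would typically be launched with Hadoop’s streaming JAR, passed as both the mapper and the reducer; the Java API expresses the same job as Mapper and Reducer classes.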

Standard Hadoop interfaces are available for Java, C, FUSE and WebDAV. RHIPE, the Hadoop interface to the R statistical language, is also popular in the life sciences community.

It turns out that Hadoop – a fault-tolerant, share-nothing architecture in which tasks must have no dependence on each other – is an excellent choice for many life sciences applications. This is largely because so much life sciences data is semi-structured or unstructured, file-based, and ideally suited for ‘embarrassingly parallel’ computation. Moreover, the use of commodity hardware (e.g., Linux clusters) keeps costs down, and little or no hardware modification is required6.

Not surprisingly, life sciences organizations were among Hadoop’s earliest adopters. The first large-scale MapReduce project was initiated by the Broad Institute in 2008 and resulted in the comprehensive Genome Analysis Toolkit (GATK)7. The Hadoop-based Crossbow project from Johns Hopkins University came soon after8.

5 Altschul SF, et al., “Basic local alignment search tool”, J Mol Biol, 215(3): 403–410, October 1990.

6 Taylor RC, “An overview of the Hadoop/MapReduce/HBase framework and its current applications in bioinformatics”, BMC Bioinformatics, 11(Suppl 12):S1, 2010, http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3040523/

7 McKenna A, et al., “The Genome Analysis Toolkit: A MapReduce framework for analyzing next-generation DNA sequencing data”, Genome Research, 20:1297–1303, July 2010.

8 http://bowtie-bio.sourceforge.net/crossbow/index.shtml




Here are a few current Hadoop-based bioinformatics applications9:

• Crossbow. Whole genome resequencing analysis; SNP genotyping from short reads.
• Contrail. De novo assembly from short sequencing reads.
• Myrna. Ultrafast short read alignment and differential gene expression from large RNA-seq data sets.
• PeakRanger. Cloud-enabled peak caller for ChIP-seq data.
• Quake. Quality-aware detection and sequencing error correction tool.
• BlastReduce. High-performance short read mapping.
• CloudBLAST. Hadoop implementation of NCBI’s BLAST.
• MrsRF. Algorithm for analyzing large evolutionary trees.

(For a more detailed example of Hadoop in operation, see the sidebar, Genomics Example: Calling SNPs with Crossbow.)


Genomics Example: Calling SNPs with Crossbow

Next-generation sequencers (NGS) like the Illumina HiSeq can produce on the order of 200 billion base pairs (200 Gbp) in a single one-week run at 60x human genome coverage, meaning each base is covered by an average of 60 reads. The greater the coverage, the more statistically significant the result. Sequence reads are much shorter than those from traditional “Sanger” sequencing, so the data requires specialized software algorithms called “short read aligners”.
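The relationship between run yield, coverage and genome size is simple arithmetic; a quick sanity check of the approximate figures above:

```python
# Rough sanity check of the coverage figures quoted above (illustrative only).
run_yield_bp = 200e9      # ~200 Gbp from one sequencing run
coverage = 60             # 60x average coverage
genome_size = run_yield_bp / coverage
print(f"Implied genome size: {genome_size / 1e9:.1f} Gbp")  # ~3.3 Gbp (human genome is ~3.2 Gbp)
```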

Crossbow is a combination of several algorithms that provide short read alignment and SNP calling, common tasks in NGS analysis. Figure 1 alongside shows the steps needed to process genome data to look for SNPs. The Map-Sort-Reduce process is ideally suited to a Hadoop framework. The cluster shown is a traditional N-node Hadoop cluster, with all of the usual Hadoop features – HDFS, program management and fault tolerance – available.

The Map step is the short read alignment algorithm, Bowtie (named after the Burrows-Wheeler Transform, BWT, on which it is based). Multiple instances of Bowtie run in parallel in Hadoop. The input tuples (ordered lists of elements) are the sequence reads and the output tuples are the alignments of the short reads.

The Sort step apportions the alignments according to a primary key (the genome partition) and sorts them by a secondary key (the offset within that partition). The data at this stage are the sorted alignments.

The Reduce step calls SNPs for each reference genome partition. Many parallel instances of the SOAPsnp algorithm (Short Oligonucleotide Analysis Package for SNP) run in the cluster. The input tuples are sorted alignments for a partition and the output tuples are SNP calls. Results are stored via HDFS and then archived in SOAPsnp format.
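The sidebar’s Map-Sort-Reduce pipeline can be sketched schematically in Python. The functions align_read and call_snps below are toy stand-ins for the real Bowtie and SOAPsnp programs that Crossbow runs inside its Map and Reduce tasks, and the tuple layout is a simplification:

```python
from itertools import groupby
from operator import itemgetter

# Toy stand-ins for Bowtie and SOAPsnp; the real tools are external programs.
def align_read(read, partition_size=1000):
    """Pretend alignment: hash the read to a position, derive its partition."""
    pos = hash(read) % 10_000
    return (pos // partition_size, pos % partition_size, read)

def call_snps(partition, alignments):
    """Pretend SNP caller: report how many alignments each partition received."""
    return [(partition, len(alignments))]

def crossbow_like_pipeline(reads):
    # Map: align each short read, emitting (partition, offset, alignment) tuples.
    alignments = [align_read(r) for r in reads]
    # Sort: primary key = genome partition, secondary key = offset.
    alignments.sort(key=itemgetter(0, 1))
    # Reduce: call SNPs independently (hence in parallel) per partition.
    calls = []
    for partition, group in groupby(alignments, key=itemgetter(0)):
        calls.extend(call_snps(partition, list(group)))
    return calls

print(crossbow_like_pipeline(["ACGTAC", "TTGACG", "GGATCC"]))
```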

9 “Got Hadoop?”, Genome Technology, Sept. 2011, http://www.genomeweb.com/informatics/got-hadoop


After several years of steady development in academic environments, Hadoop is now poised for rapid commercialization and broader uptake in biopharma and healthcare. Early adoption has been strongest among next-generation sequencing (NGS) centers, where NGS workflows can generate 2 terabytes (TB) of data per run per week per sequencer – not including the raw images. For these organizations, scale-out storage that integrates with HPC is a line-item requirement.

EMC® Isilon®, long a leader in scale-out NAS storage solutions, understands these challenges and today provides scale-out storage for the workflows of nearly all the DNA sequencing instrument manufacturers on the market, at more than 150 customers. Since 2008, the EMC Isilon OneFS® storage platform has built an overall installed base of more than 65 petabytes (PB). Recently, EMC introduced the industry’s first scale-out NAS system with native Hadoop support (via HDFS).

The EMC Isilon OneFS file system now provides connectivity to the Hadoop Distributed File System (HDFS) just like any other shared file system protocol: NFS, CIFS or SMB10. This allows data to be co-located with compute nodes while MapReduce “jobs” are built with the standard, higher-level Java application programming interface (API). EMC has gone one step further by combining its OneFS-based NAS solution with EMC Greenplum® HD, a powerful analytics platform, to create a Hadoop appliance. Together, the two offerings relieve users of the burden of cobbling together various open source Hadoop components, which sometimes proves problematic. “Hadoop meets all the tenets of Jim Gray’s Laws of Data Engineering11, which have not changed in 15 years,” says Sanjay Joshi, CTO, Life Sciences, EMC Isilon Storage Division. Those tenets include: scientific computing is very data intensive, with no real limits; the solution is a scale-out architecture with distributed data access; and bring the computation to the data, rather than the data to the computations.
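In practical terms, multiprotocol access means the same directory can be reached both as an ordinary POSIX path and through the standard Hadoop file system client. A minimal sketch, assuming a hypothetical Isilon cluster with an NFS mount and an HDFS export (the paths are illustrative, not from the paper):

```python
import os
import subprocess

# Hypothetical paths: the same OneFS directory exposed over NFS and HDFS.
nfs_path = "/mnt/isilon/sequencing/run42"              # NFS mount (illustrative)
hdfs_path = "hdfs://isilon-cluster/sequencing/run42"   # HDFS view (illustrative)

# POSIX view over NFS: ordinary file operations work unchanged.
print(os.listdir(nfs_path))

# HDFS view via the standard 'hadoop fs' client; MapReduce jobs read the
# same data in place, with no copy into a separate Hadoop cluster.
subprocess.run(["hadoop", "fs", "-ls", hdfs_path], check=True)
```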

10 Hadoop on EMC Isilon Scale Out NAS: EMC White Paper, Part Number h10528

11 From Jim Gray, “Scalable Computing”, presentation at Nortel: Microsoft Research, April 1999




“Isilon built the industry’s first Scale Out storage architecture. Now, with its native and enterprise-ready HDFS protocol via OneFS and Greenplum HD, EMC brings simplicity to Big Data in Science,” says Joshi.

EMC Isilon OneFS combines the three layers of traditional storage architectures—the file system, volume manager, and RAID—into one unified software layer, creating a single intelligent distributed file system that runs on one storage cluster. Important advantages of OneFS for Hadoop are:

• Scalable: Linear scaling with increasing capacity – from 18 TB to 16 PB in a single file system and a single global namespace. Scale out as needs grow, independent of the compute layer.

• Predictable: Dynamic content balancing is performed as nodes are added or upgraded or as capacity changes. Because the process is automatic, no added management time is required.

• Available: OneFS protects your data against power loss, node or disk failures, and loss of quorum, and during storage rebuilds, by distributing data, metadata and parity across all nodes. It also eliminates the Hadoop “name node” as a single point of failure. OneFS is therefore “self-healing”.

• Efficient: Compared to the average 50% efficiency of traditional RAID systems, OneFS provides over 80% storage efficiency, independent of CPU compute or cache. This is achieved by tiering nodes into the three types shown in the figure alongside and by pooling within those node types. The efficiency also extends to replication: instead of the 3x copies that Hadoop normally requires, EMC Isilon’s HDFS protocol stores data once at over 80% efficiency (a quick capacity comparison follows this list).

• Enterprise-ready: Administration of the storage clusters is via an intuitive web-based UI. Connectivity to your process is through standard file protocols: CIFS, SMB, NFS, FTP/HTTP, iSCSI and HDFS. Standardized authentication and access control are available at scale: AD, LDAP and NIS.
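To make the efficiency claim concrete, here is a short illustrative calculation (the 100 TB figure is arbitrary; the 3x and 80% values come from the text above):

```python
# Rough raw-capacity comparison for 100 TB of usable Hadoop data (illustrative).
usable_tb = 100

raw_hdfs_3x = usable_tb * 3        # default HDFS: three full copies
raw_onefs_80 = usable_tb / 0.80    # parity-protected storage at ~80% efficiency

print(f"HDFS 3x replication : {raw_hdfs_3x:.0f} TB raw")   # 300 TB
print(f"~80% efficient OneFS: {raw_onefs_80:.0f} TB raw")  # 125 TB
```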


Storage tiers “without fears”: performance-based node tiers reside in one global namespace, connected via a dedicated backend network.


CONCLUSION

What began as an internal project at Google in 2004 has now matured into a scalable framework for two computing paradigms that are particularly suited to the life sciences: parallelization and distribution. Indeed, post-processing streaming data patterns for text strings, clustering and sorting – the core process patterns in the life sciences – are ideal workflows for Hadoop.

Case in point: the Crossbow example cited earlier aligned Illumina NGS reads for SNP calling over ‘35x’ coverage of the human genome in under 3 hours using a 40-node Hadoop cluster, an order of magnitude better than traditional HPC technology for parallel processes.

The EMC Isilon OneFS distributed file system handles the Hadoop Distributed File System (HDFS) just like any other shared file system and shields the single point of failure in Hadoop: the name node. The hybrid cloud model (a source data mirror) with Hadoop as a Service (HaaS) is the current state of the art. For more information, visit EMC Isilon at http://www.emc.com/isilon.

Summary of Hadoop Attributes

Overview
• Write Once Read Many times (WORM)
• Co-locates data with compute; uses a higher-level architecture with a Java API
• HDFS is a distributed file system that runs on large clusters

Advantages
• Uses the MapReduce framework – a batch query processor that scales linearly
• EMC Isilon OneFS implements HDFS and eliminates the single point of failure, the “name node”
• Standard programming language development: Java, Ruby, Python and C++ create MapReduce jobs; FUSE and WebDAV interfaces provide architectural flexibility

Challenges
• HDFS block size is 128 MB (can be increased), so large numbers of small files (<8 KB) reduce performance: use Hadoop Archive (HAR); see the sketch after this list
• Data coherency and latency remain issues for large-scale implementations
• Not suited for low-latency, “in-process” use cases like real-time, spectral or video analysis
• Data transfer from genome sequencing data sources to Hadoop clusters in the cloud remains an issue; the current business model is to mirror the data between the source and the cloud and then use a Hadoop as a Service model on the mirrored data
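For the small-file challenge above, the standard mitigation is to pack many small files into a Hadoop Archive. A minimal sketch of invoking the stock hadoop archive tool from Python, with hypothetical HDFS paths:

```python
import subprocess

# Illustrative only: pack a directory of many small files into a single
# Hadoop Archive (HAR) so they occupy fewer HDFS blocks and name-node entries.
parent = "/data/reads"           # hypothetical HDFS parent directory
src_dir = "small_fastq_chunks"   # hypothetical subdirectory full of tiny files
dest_dir = "/data/archived"      # hypothetical destination directory

subprocess.run(
    ["hadoop", "archive", "-archiveName", "reads.har", "-p", parent, src_dir, dest_dir],
    check=True,
)
# The archive is then readable in place through the har:// filesystem scheme.
```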
