Title
Analyzing large scale genomic data on the cloud with Sparkhit.
Abstract
Motivation: The increasing amount of next-generation sequencing data poses a fundamental challenge to large-scale genomic analytics. Existing tools use different distributed computational platforms to scale out bioinformatics workloads; however, they do not scale efficiently and incur heavy runtime overheads when pre-processing large amounts of data. To address these limitations, we have developed Sparkhit: a distributed bioinformatics framework built on top of the Apache Spark platform. Results: Sparkhit integrates a variety of analytical methods and is implemented in Spark's extended MapReduce model. It runs 92-157 times faster than MetaSpark on metagenomic fragment recruitment and 18-32 times faster than Crossbow on data pre-processing. We analyzed 100 terabytes of data across four genomic projects in the cloud in 21 h, including the run times of cluster deployment and data downloading. Furthermore, our application on the entire Human Microbiome Project shotgun sequencing data was completed in 2 h, presenting an approach for easily associating large amounts of public datasets with reference data.
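The abstract describes Sparkhit as being implemented in Spark's extended MapReduce model. The sketch below illustrates what that map-and-reduce style of distributed read processing looks like on Spark in Scala; the input path, output path and k-mer counting task are illustrative assumptions, not Sparkhit's actual API.

```scala
import org.apache.spark.sql.SparkSession

// Minimal sketch of Spark's MapReduce style over sequencing reads:
// a per-read map step followed by a cluster-wide reduce step.
object ReadKmerCount {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder.appName("ReadKmerCount").getOrCreate()
    val sc = spark.sparkContext
    val k = 21 // assumed k-mer length, chosen only for illustration

    // Read FASTQ as plain text; the sequence of each record is every 4th line, starting at line 1.
    val lines = sc.textFile("hdfs:///data/reads.fastq") // hypothetical input path
    val seqs = lines.zipWithIndex().filter { case (_, i) => i % 4 == 1 }.map(_._1)

    // Map: emit (k-mer, 1) pairs for every read; Reduce: sum counts per k-mer across partitions.
    val kmerCounts = seqs
      .flatMap(read => read.sliding(k))
      .map(kmer => (kmer, 1L))
      .reduceByKey(_ + _)

    kmerCounts.saveAsTextFile("hdfs:///out/kmer_counts") // hypothetical output path
    spark.stop()
  }
}
```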
Year
2018
DOI
10.1093/bioinformatics/btx808
Venue
BIOINFORMATICS
Field
Reference data (financial markets), Data mining, Shotgun sequencing, Spark (mathematics), Computer science, Terabyte, Upload, Analytics, Database, Scalability, Cloud computing
DocType
Journal
Volume
34
Issue
4
ISSN
1367-4803
Citations
9
PageRank
0.55
References
10
Authors
3
Name | Order | Citations | PageRank
Liren Huang | 1 | 4 | 0.55
Jan Krüger | 2 | 76 | 8.32
Alexander Sczyrba | 3 | 116 | 8.86