Abstract |
---|
In order to avoid the reference bias introduced by mapping reads to a reference genome, bioinformaticians are investigating reference-free methods for analyzing sequenced genomes. With large projects sequencing thousands of individuals, this raises the need for tools capable of handling terabases of sequence data. A key method is the Burrows-Wheeler transform (BWT), which is widely used for compressing and indexing reads. We propose a practical algorithm for building the BWT of a large read collection by merging the BWTs of subcollections. With our 2.4 Tbp datasets, the algorithm can merge 600 Gbp/day on a single system, using 30 gigabytes of memory overhead on top of the run-length encoded BWTs. |
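The abstract centers on the Burrows-Wheeler transform of a read collection. As a minimal illustration of what the BWT is (not the paper's merging algorithm, which operates on run-length encoded BWTs of terabase-scale subcollections), here is a naive sorted-rotations construction; the `$` sentinel is an assumption of this toy sketch:

```python
# Illustrative sketch only: a naive BWT via sorted rotations.
# The paper merges the BWTs of large read subcollections; this toy
# shows what a BWT is, not that merging method.

def bwt(text: str) -> str:
    """Build the Burrows-Wheeler transform of `text`.

    `text` must end with a unique sentinel (here '$') that sorts
    before every other character.
    """
    assert text.endswith("$")
    n = len(text)
    # All cyclic rotations of the text, sorted lexicographically.
    rotations = sorted(text[i:] + text[:i] for i in range(n))
    # The BWT is the last column of the sorted rotation matrix.
    return "".join(rot[-1] for rot in rotations)

print(bwt("banana$"))  # annb$aa
```

This quadratic-space construction is only feasible for short strings; practical tools use suffix-array or induced-sorting methods, and the paper's contribution is building the BWT of a huge collection by merging precomputed BWTs of its parts.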
Year | DOI | Venue |
---|---|---|
2015 | 10.1109/DCC.2016.17 | 2016 Data Compression Conference (DCC) |
Keywords | Field | DocType |
Burrows-Wheeler transform, reference genome, bioinformatics, sequenced genome analysis, sequence data terabase handling, BWT, indexing | Data mining, Burrows–Wheeler transform, Computer science, Gigabyte, Search engine indexing, Theoretical computer science, Data sequences, Merge (version control), Reference genome | Journal |
Volume | ISSN | ISBN |
abs/1511.00898 | 1068-0314 | 978-1-5090-1854-3 |
Citations | PageRank | References |
0 | 0.34 | 13 |
Authors |
---|
1 |
Name | Order | Citations | PageRank |
---|---|---|---|
Jouni Sirén | 1 | 222 | 14.85 |