Abstract
---

We present a neural encoder-decoder model to convert images into presentational markup based on a scalable coarse-to-fine attention mechanism. Our method is evaluated in the context of image-to-LaTeX generation, and we introduce a new dataset of real-world rendered mathematical expressions paired with LaTeX markup. We show that unlike neural OCR techniques using CTC-based models, attention-based approaches can tackle this non-standard OCR task. Our approach outperforms classical mathematical OCR systems by a large margin on in-domain rendered data, and, with pretraining, also performs well on out-of-domain handwritten data. To reduce the inference complexity associated with the attention-based approaches, we introduce a new coarse-to-fine attention layer that selects a support region before applying attention.
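The abstract's final sentence names the paper's key architectural idea: a coarse pass first selects a support region, and attention is then applied only within that region. Below is a minimal NumPy sketch of that two-stage idea, not the authors' implementation: the mean-pooling, the hard argmax selection, and all names (`coarse_to_fine_attention`, `region_size`) are illustrative assumptions, and in the paper this layer sits inside a neural encoder-decoder over rendered images.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def coarse_to_fine_attention(query, features, region_size):
    """Illustrative coarse-to-fine attention sketch (assumed names/pooling).

    query:    (d,) decoder state
    features: (n, d) flattened encoder feature grid; n divisible by region_size
    """
    n, d = features.shape
    num_regions = n // region_size
    # Coarse stage: summarize each region by mean-pooling, score against query.
    regions = features.reshape(num_regions, region_size, d)
    coarse_keys = regions.mean(axis=1)        # (num_regions, d)
    coarse_scores = coarse_keys @ query       # (num_regions,)
    support = int(np.argmax(coarse_scores))   # hard selection of one support region
    # Fine stage: ordinary dot-product attention restricted to the support region.
    fine_scores = regions[support] @ query    # (region_size,)
    alpha = softmax(fine_scores)
    context = alpha @ regions[support]        # (d,) attended context vector
    return context, support

# Example: 64 feature vectors of size 8, split into regions of 16.
rng = np.random.default_rng(0)
feats = rng.standard_normal((64, 8))
q = rng.standard_normal(8)
ctx, region = coarse_to_fine_attention(q, feats, region_size=16)
print(ctx.shape, region)  # (8,) plus the index of the selected region
```

The saving the abstract alludes to comes from the restriction: the fine pass scores only `region_size` entries instead of all `n`, and the coarse pass adds `n / region_size` scores, so per-step work drops from O(n) to O(n/region_size + region_size).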
Year | Venue | Field
---|---|---
2017 | ICML | Expression (mathematics), Computer science, Inference, Artificial intelligence, Presentational and representational acting, Machine learning, Scalability, Markup language

DocType | Citations | PageRank
---|---|---
Conference | 12 | 0.62

References | Authors
---|---
24 | 4
Name | Order | Citations | PageRank
---|---|---|---
Yuntian Deng | 1 | 241 | 14.12
Anssi Kanervisto | 2 | 24 | 6.77
Jeffrey Ling | 3 | 12 | 0.62
Alexander M. Rush | 4 | 14 | 1.00