Abstract |
---|
This paper presents the most recent developments of the webASR service (www.webasr.org), the world's first web-based fully functioning automatic speech recognition platform for scientific use. Initially released in 2008, the functionalities of webASR have recently been expanded with 3 main goals in mind: facilitate access through a RESTful architecture that allows for easy use through either the web interface or an API; allow the use of input metadata, when available from the user, to improve system performance; and increase the coverage of available systems beyond speech recognition. Several new systems for transcription, diarisation, lightly supervised alignment and translation are currently available through webASR. The results in a series of well-known benchmarks (the RT'09, IWSLT'12 and MGB'15 evaluations) show how these webASR systems provide state-of-the-art performance across these tasks. |
Year | Venue | Field |
---|---|---|
2016 | INTERSPEECH | Metadata, Architecture, Computer science, Speech recognition, Multimedia, Speech technology, Cloud computing |

DocType | Citations | PageRank |
---|---|---|
Conference | 0 | 0.34 |
References | Authors |
---|---|
0 | 9 |
Name | Order | Citations | PageRank |
---|---|---|---|
Thomas Hain | 1 | 171 | 28.29 |
Jeremy Christian | 2 | 0 | 0.34 |
Oscar Saz | 3 | 142 | 16.30 |
Salil Deena | 4 | 27 | 3.61 |
Madina Hasan | 5 | 13 | 5.35 |
Raymond W. M. Ng | 6 | 340 | 21.61 |
Rosanna Milner | 7 | 11 | 2.59 |
Mortaza Doulaty | 8 | 33 | 5.35 |
Yulan Liu | 9 | 0 | 0.34 |