| Title | | | Year |
|---|---|---|---|
| Self-regularised Minimum Latency Training for Streaming Transformer-based Speech Recognition | 0 | 0.34 | 2022 |
| Multiple-hypothesis RNN-T Loss for Unsupervised Fine-tuning and Self-training of Neural Transducer | 0 | 0.34 | 2022 |
| An Investigation Into The Multi-Channel Time Domain Speaker Extraction Network | 0 | 0.34 | 2021 |
| Head-Synchronous Decoding for Transformer-Based Streaming ASR | 0 | 0.34 | 2021 |
| Transformer-Based Online Speech Recognition with Decoder-end Adaptive Computation Steps | 0 | 0.34 | 2021 |
| Automated Attack and Defense Framework toward 5G Security | 1 | 0.35 | 2020 |
| Framewise Supervised Training Towards End-to-End Speech Recognition Models: First Results | 2 | 0.39 | 2019 |