Title
A Finetuned Language Model for Recommending cQA-QAs for Enriching Textbooks
Abstract
Textbooks play a vital role in any educational system; yet, despite their clarity and coverage, students often turn to community question answering (cQA) forums to acquire additional knowledge. Because of the high data volume, the quality of question-answer (QA) pairs on cQA forums varies greatly, so it takes considerable effort to sift through all candidate QA pairs for useful insight. This paper proposes a "sentence-level text enrichment system" in which a fine-tuned BERT (Bidirectional Encoder Representations from Transformers) summarizer understands the given text, picks out the important sentences, and then rearranges them to give an overall summary of the text document. For each important sentence, we recommend relevant QA pairs from cQA forums to make learning more effective. In this work, we fine-tuned the pre-trained BERT model to select the QA pairs that are most relevant for enriching important sentences of the textbook. We observe that fine-tuning the BERT model significantly improves performance on QA selection and that it outperforms existing RNN-based models on such tasks. We also investigate the effectiveness of our fine-tuned BERT-Large model on three cQA datasets for the QA selection task and observe a maximum improvement of 19.72% over previous models. Experiments were carried out on NCERT (Grade IX and X) textbooks from India and the "Pattern Recognition and Machine Learning" textbook. Extensive evaluation demonstrates that the proposed model offers more precise and relevant recommendations than state-of-the-art methods.
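The QA-selection step described in the abstract can be pictured with a minimal sketch (not the authors' released code): a BERT cross-encoder scores each cQA question-answer pair against a textbook sentence, and the highest-scoring pairs are recommended for enrichment. The model name, example sentence, and QA pairs below are placeholders; in the paper's setting the classifier head would first be fine-tuned on labeled relevant/irrelevant QA pairs.

import torch
from transformers import BertTokenizer, BertForSequenceClassification

# Placeholder checkpoint; the paper fine-tunes BERT-Large for QA selection.
tokenizer = BertTokenizer.from_pretrained("bert-large-uncased")
model = BertForSequenceClassification.from_pretrained(
    "bert-large-uncased", num_labels=2)  # assume fine-tuned weights here
model.eval()

sentence = "Mitochondria are the powerhouse of the cell."  # placeholder
qa_pairs = [  # placeholder cQA question-answer pairs
    ("What do mitochondria do?", "They generate most of the cell's ATP."),
    ("Who wrote Hamlet?", "William Shakespeare."),
]

scores = []
for question, answer in qa_pairs:
    # Encode the textbook sentence and the QA pair as a sentence pair.
    inputs = tokenizer(sentence, question + " " + answer,
                       truncation=True, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    # Probability that this QA pair is relevant to the sentence.
    scores.append(torch.softmax(logits, dim=-1)[0, 1].item())

# Recommend QA pairs in descending order of relevance score.
ranked = sorted(zip(scores, qa_pairs), key=lambda t: t[0], reverse=True)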
Year: 2021
DOI: 10.1007/978-3-030-75765-6_34
Venue: ADVANCES IN KNOWLEDGE DISCOVERY AND DATA MINING, PAKDD 2021, PT II
Keywords: cQA, BERT, QA, Precision, Text-enrichment
DocType: Conference
Volume: 12713
ISSN: 0302-9743
Citations: 0
PageRank: 0.34
References: 0
Authors: 2
Name           Order  Citations  PageRank
Shobhan Kumar  1      0          0.34
Arun Chauhan   2      0          0.34