Targeted feedback generation for constructed-response questions
2021
Constructed-response questions (CRQs) are an important learning activity that can foster generative processing and promote a deeper understanding of core content. However, providing feedback on and grading free-form text responses is labor-intensive. This paper proposes a novel solution for automatically providing targeted feedback in online learning environments without any model training. We leverage human-defined model answers and grading rubrics to generate feedback on the learner's answer. We apply state-of-the-art natural language processing (NLP) techniques, including text segmentation, pretrained language models (LMs), and contextualized text embeddings, to discover misconceptions and provide feedback based on them. We demonstrate the proposed solution on two CRQs embedded in an open-navigation online learning system focused on workforce learning. We measure the accuracy of the machine-generated feedback against human expert annotations, using true positive rate (TPR) and false positive rate (FPR) as the statistical measures. We find that semantic key phrase extraction outperforms statistics-based and graph-based approaches, and that pretrained LMs fine-tuned on similar tasks achieve the best performance compared to non-contextualized embedding approaches. To the best of our knowledge, this is the first work to evaluate the accuracy of misconception analysis (interchangeable with knowledge gaps, or missing/inaccurate key points). We establish a baseline of 75% TPR and 22% FPR with semantic key phrase extraction and contextualized embedding approaches. We also demonstrate that semantic segmentation could reduce the human effort required to design comprehensive grading rubrics.
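The abstract does not publish the exact pipeline, but the core idea it describes (matching rubric key points against learner answer segments with contextualized embeddings, and flagging uncovered points as knowledge gaps) can be sketched as follows. The model name, similarity threshold, and segmentation are illustrative assumptions, not the paper's configuration.

```python
# Minimal sketch, assuming sentence-transformers for contextualized
# embeddings. Model choice and threshold are illustrative assumptions.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed pretrained LM

def find_missing_key_points(answer_segments, rubric_key_points, threshold=0.6):
    """Flag rubric key points not semantically covered by any answer segment."""
    seg_emb = model.encode(answer_segments, convert_to_tensor=True)
    key_emb = model.encode(rubric_key_points, convert_to_tensor=True)
    # Cosine similarity between every rubric key point and every segment.
    sims = util.cos_sim(key_emb, seg_emb)
    missing = []
    for i, key_point in enumerate(rubric_key_points):
        if sims[i].max().item() < threshold:  # no segment covers this point
            missing.append(key_point)
    return missing

# Example usage: each missing key point maps to a targeted feedback message.
rubric = ["Photosynthesis converts light energy into chemical energy",
          "Chlorophyll absorbs light inside the chloroplasts"]
answer = ["Plants use sunlight to make sugar."]
for gap in find_missing_key_points(answer, rubric):
    print(f"Feedback: your answer does not address: {gap}")
```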
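The reported evaluation compares machine-flagged gaps against expert annotations using TPR = TP / (TP + FN) and FPR = FP / (FP + TN). A small helper along these lines, assumed here for illustration, makes the bookkeeping concrete:

```python
# Hypothetical scorer: compare machine-flagged missing key points against
# expert annotations over the full set of rubric key points.
def tpr_fpr(predicted_missing, expert_missing, all_key_points):
    predicted, expert = set(predicted_missing), set(expert_missing)
    tp = len(predicted & expert)                        # correctly flagged gaps
    fn = len(expert - predicted)                        # gaps the model missed
    fp = len(predicted - expert)                        # falsely flagged gaps
    tn = len(set(all_key_points) - predicted - expert)  # correctly passed points
    tpr = tp / (tp + fn) if tp + fn else 0.0
    fpr = fp / (fp + tn) if fp + tn else 0.0
    return tpr, fpr
```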