Abstract: Many electronic feedback systems have been proposed for writing support. However, most of
these systems aim to support writing to communicate rather than writing to learn, as in the
case of literature review writing. Trigger questions are a potentially effective form of support for writing to
learn, but current automatic question generation approaches focus on generating factual questions
for reading comprehension or vocabulary assessment. This article presents a novel Automatic
Question Generation (AQG) system, called G-Asks, which generates specific trigger questions as
a form of support for students' learning through writing. We conducted a large-scale case study,
involving 24 human supervisors and 33 research students in an Engineering Research Method
course, and compared the questions generated by G-Asks with human-generated questions. The results
indicate that G-Asks can generate questions as useful as those written by human supervisors (‘useful’ is one of five
question quality measures), while significantly outperforming the Human Peer and Generic Question conditions
in most quality measures after questions with grammatical and semantic errors were filtered out.
Furthermore, we identified the most frequent question types derived from the human supervisors’
questions and discussed how human supervisors generate such questions from the source text.
Keywords: Automatic Question Generation; Natural Language Processing; Academic
Writing Support