This paper investigates the problem of generating multiple questions for a given context paragraph. Existing question generation (QG) models take no notice of intra-group similarity and type diversity when forming a question group, attributes that are critical for applying QG techniques in educational settings. This paper proposes a two-stage framework that combines neural language models with a genetic algorithm for the question group generation task. Our design significantly outperforms the compared baselines in experiments on benchmark datasets. A human evaluation is also conducted to validate the design and understand its limitations.
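The second stage described above can be illustrated with a minimal genetic-algorithm sketch. This is an assumption-laden toy, not the paper's implementation: it assumes a first-stage neural QG model has already produced a candidate pool of typed questions, and it evolves a fixed-size group whose fitness rewards type diversity and penalizes intra-group token overlap. All names, candidates, and hyperparameters here are hypothetical.

```python
import random

# Hypothetical candidate pool of (question, type) pairs, standing in for the
# output of a first-stage neural QG model (illustrative, not the paper's data).
CANDIDATES = [
    ("What is the capital of France?", "Factoid"),
    ("Paris is the capital of ____.", "Cloze"),
    ("Summarize the passage about France.", "Summarization"),
    ("What river runs through Paris?", "Factoid"),
    ("France borders ____ countries.", "Cloze"),
    ("What is the main idea of the text?", "Summarization"),
]

GROUP_SIZE = 3

def token_overlap(a, b):
    """Jaccard overlap between the token sets of two questions."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb)

def fitness(group):
    """Reward type diversity, penalize intra-group similarity."""
    n_types = len({CANDIDATES[i][1] for i in group})
    sim = sum(token_overlap(CANDIDATES[i][0], CANDIDATES[j][0])
              for i in group for j in group if i < j)
    return n_types - sim

def mutate(group):
    """Swap one selected question for an unselected candidate."""
    pool = [i for i in range(len(CANDIDATES)) if i not in group]
    child = list(group)
    child[random.randrange(len(child))] = random.choice(pool)
    return child

def evolve(generations=100, pop_size=20, seed=0):
    """Truncation-selection GA over fixed-size question groups."""
    random.seed(seed)
    pop = [random.sample(range(len(CANDIDATES)), GROUP_SIZE)
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]  # keep the fitter half
        pop = survivors + [mutate(random.choice(survivors))
                           for _ in range(pop_size - len(survivors))]
    return max(pop, key=fitness)

best = evolve()
```

Because the fitness gives one point per distinct question type, the evolved group tends to cover all three types while avoiding near-duplicate wordings, which mirrors the intra-group similarity and type diversity objectives stated above.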
This dataset is a subset of RACE containing three types of questions: Factoid, Cloze, and Summarization.