Nare Hakobyan
V. Brusov State University, Armenia
Abstract
Despite their popularity, designing reading comprehension tests in the form of multiple-choice questions (MCQs) is a time-consuming and arduous endeavor. For this reason, EFL teachers quite often avoid designing new tests for supplementary reading materials and redesigning less useful ones. In this context, the use of Artificial Intelligence (AI) is valuable, more specifically the AI application Questgen, which can generate reading comprehension tests within seconds, regardless of text length or content. However, there is a shortage of data on the adequacy of such platforms and their impact on students’ performance. The aim of the current paper is to identify how AI performs as an MCQ generator for checking reading comprehension compared to teacher-designed tests, pointing out its strengths and weaknesses. This mixed-method research uses 10 reading comprehension tests, weekly introspective questionnaires, teacher interviews, and a focus group discussion as data collection tools. The study addresses three research questions: 1) How do AI-generated tests differ from teacher-designed tests? 2) How does students’ performance depend on the test type? 3) What are teachers’ and students’ attitudes toward AI-generated vs. teacher-designed tests?
Keywords
Artificial Intelligence (AI), reading comprehension tests, MCQs, AI-generated tests, attitudes