Talking tech in education: a systematic review of chatbot evaluation, methods, communication modalities, and impact

Ons Al-Shamaileh, Omar Mubin, Nishara Nizamuddin

Research output: Contribution to journal › Article › peer-review

Abstract

This systematic review investigates how chatbots are evaluated and adopted in education, namely the methods used to assess their effectiveness, their communication modalities, and their benefits and challenges. Following the PRISMA reporting guidelines, a comprehensive search was conducted across seven academic databases, covering studies published between 2014 and 2024 and yielding 124 empirical studies that met the inclusion and exclusion criteria. Thematic analysis focused on evaluation criteria, advantages, and disadvantages, while data collection methods and communication modalities were analyzed descriptively. Eight key themes were identified in the assessment of educational chatbots, including technology acceptance, learning outcomes, and usability. Surveys were the most common data collection method, followed by interviews. Text dominated as the chatbot communication modality, while multimodal formats were rarely used. Reported benefits included 24/7 availability, personalized feedback, and improved motivation. However, challenges such as technical issues and academic integrity concerns were noted. Future research should explore longitudinal impacts and more inclusive assessment frameworks.

Original language: English
Number of pages: 37
Journal: International Journal of Human-Computer Interaction
DOIs
Publication status: E-pub ahead of print (In Press) - 2025

Keywords

  • Chatbot
  • education
  • evaluation
  • human-computer interaction
  • user experience
