Bibliometrically and systematically analyzing automated writing evaluation for English learning
Abstract
Automated writing evaluation (AWE) is a widely discussed application of artificial intelligence in English learning, and its effect on learning English as a second language merits close examination. This study combined bibliometric analysis and systematic review to explore the use of AWE for learning English as a second language. VOSviewer was used to identify the most discussed topics and the ten most cited authors, organizations, countries, references, and sources in AWE research. Fifty-six peer-reviewed articles were selected following the Preferred Reporting Items for Systematic Review and Meta-Analysis Protocols (PRISMA-P). The analysis revealed that AWE is helpful, but its effectiveness varies with the type of feedback, and it cannot yet replace human feedback. Teachers play a significant role in integrating AWE into the classroom. Future research could focus on specific ways to integrate AWE into the classroom more effectively.
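To illustrate the kind of computation that underlies the bibliometric mapping described above, the sketch below counts keyword co-occurrences across article records, which is the core statistic behind term co-occurrence maps such as those produced by VOSviewer. This is a minimal illustrative sketch only, not the study's actual pipeline; the sample records and the function name are hypothetical.

```python
from itertools import combinations
from collections import Counter

# Hypothetical records: each inner list holds the author keywords of one
# article. Real bibliometric data would come from a database export.
records = [
    ["automated writing evaluation", "EFL", "feedback"],
    ["automated writing evaluation", "feedback", "revision"],
    ["EFL", "feedback", "writing quality"],
]

def cooccurrence_counts(records):
    """Count how often each unordered keyword pair appears in the same record."""
    counts = Counter()
    for keywords in records:
        # sorted(set(...)) deduplicates within a record and gives each
        # pair a canonical order, so (a, b) and (b, a) are counted together.
        for pair in combinations(sorted(set(keywords)), 2):
            counts[pair] += 1
    return counts

counts = cooccurrence_counts(records)
```

Tools like VOSviewer build on such counts by normalizing them (e.g., association strength) and laying the terms out in two dimensions; the raw pair frequencies are the starting point.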
References
Allen LK, Likens AD, McNamara DS (2019). Writing flexibility in argumentative essays: A multidimensional analysis. Reading and Writing: An Interdisciplinary Journal 32(6): 1607–1634. doi: 10.1007/s11145-018-9921-y
Bai L, Hu G (2017). In the face of fallible AWE feedback: How do students respond? Educational Psychology 37(1): 67–81. doi: 10.1080/01443410.2016.1223275
Bridgeman B, Ramineni C (2017). Design and evaluation of automated writing evaluation models: Relationships with writing in naturalistic settings. Assessing Writing 34: 62–71. doi: 10.1016/j.asw.2017.10.001
Butterfuss R, Roscoe RD, Allen LK, et al. (2022). Strategy uptake in writing pal: Adaptive feedback and instruction. Journal of Educational Computing Research 60(3): 696–721. doi: 10.1177/07356331211045304
Chen CFE, Cheng WYE (2008). Beyond the design of automated writing evaluation: Pedagogical practices and perceived learning effectiveness in EFL writing classes. Language Learning & Technology 12(2): 94–112.
Chen M, Cui Y (2022). The effects of AWE and peer feedback on cohesion and coherence in continuation writing. Journal of Second Language Writing 57: 100915. doi: 10.1016/j.jslw.2022.100915
Chen Z, Chen W, Jia J, Le H (2022). Exploring AWE-supported writing process: An activity theory perspective. Language Learning & Technology 26(2): 129–148. https://hdl.handle.net/10125/73482
Dikli S, Bleyle S (2014). Automated Essay Scoring feedback for second language writers: How does it compare to instructor feedback? Assessing Writing 22: 1–17. doi: 10.1016/j.asw.2014.03.006
Du Y, Gao H (2022). Determinants affecting teachers’ adoption of AI-based applications in EFL context: An analysis of analytic hierarchy process. Education and Information Technologies 27(7): 9357–9384. doi: 10.1007/s10639-022-11001-y
Fan N (2023). Exploring the effects of automated written corrective feedback on EFL students’ writing quality: A mixed-methods study. Sage Open 13(2). doi: 10.1177/21582440231181296
Feng HH, Chukharev-Hudilainen E (2022). Genre-based AWE system for engineering graduate writing: Development and evaluation. Language Learning & Technology 26(2): 58–77. https://hdl.handle.net/10125/73479
Foster S (2019). What barriers do students perceive to engagement with automated immediate formative feedback. Journal of Interactive Media in Education 2019(1): 15. doi: 10.5334/jime.516
Fu QK, Zou D, Xie H, Cheng G (2022). A review of AWE feedback: Types, learning outcomes, and implications. Computer Assisted Language Learning. doi: 10.1080/09588221.2022.2033787
Gao J (2021). Exploring the feedback quality of an automated writing evaluation system Pigai. International Journal of Emerging Technologies in Learning (iJET) 16(11): 322–330. doi: 10.3991/ijet.v16i11.19657
Godwin-Jones R (2022). Partnering with AI: Intelligent writing assistance and instructed language learning. Language Learning & Technology 26(2): 5–24. https://hdl.handle.net/10125/73474
Grimes D, Warschauer M (2010). Utility in a fallible tool: A multi-site case study of automated writing evaluation. The Journal of Technology, Learning, and Assessment 8(6).
Guo Q, Feng R, Hua Y (2021). How effectively can EFL students use automated written corrective feedback (AWCF) in research writing? Computer Assisted Language Learning 35(9): 2312–2331. doi: 10.1080/09588221.2021.1879161
Han T, Sari E (2022). An investigation on the use of automated feedback in Turkish EFL students’ writing classes. Computer Assisted Language Learning. doi: 10.1080/09588221.2022.2067179
Han Y, Zhao S, Ng LL (2021). How technology tools impact writing performance, lexical complexity, and perceived self-regulated learning strategies in EFL academic writing: A comparative study. Frontiers in Psychology 12: 752793. doi: 10.3389/fpsyg.2021.752793
Huang X, Zou D, Cheng G, et al. (2023). Trends, research issues and applications of artificial intelligence in language education. Educational Technology & Society 26(1): 112–131. doi: 10.30191/ETS.202301_26(1).0009
Jiang L, Yu S, Wang C (2020). Second language writing instructors’ feedback practice in response to automated writing evaluation: A sociocultural perspective. System 93: 102302. doi: 10.1016/j.system.2020.102302
Koltovskaia S (2020). Student engagement with automated written corrective feedback (AWCF) provided by Grammarly: A multiple case study. Assessing Writing 44: 100450. doi: 10.1016/j.asw.2020.100450
Lai YH (2010). Which do students prefer to evaluate their essays: Peers or computer program. British Journal of Educational Technology 41(3): 432–454. doi: 10.1111/j.1467-8535.2009.00959.x
Lang F, Li S, Zhang S (2019). Research on reliability and validity of mobile networks-based automated writing evaluation. International Journal of Mobile Computing and Multimedia Communications 10(1): 18–31. doi: 10.4018/IJMCMC.2019010102
Li J (2022). English writing feedback based on online automatic evaluation in the era of big data. Mobile Information Systems 2022: 9884273. doi: 10.1155/2022/9884273
Li J, Link S, Hegelheimer V (2015). Rethinking the role of automated writing evaluation (AWE) feedback in ESL writing instruction. Journal of Second Language Writing 27: 1–18. doi: 10.1016/j.jslw.2014.10.004
Li R (2021). Modeling the continuance intention to use automated writing evaluation among Chinese EFL learners. Sage Open 11(4). doi: 10.1177/21582440211060782
Li Z (2021). Teachers in automated writing evaluation (AWE) system-supported ESL writing classes: Perception, implementation, and influence. System 99: 102505. doi: 10.1016/j.system.2021.102505
Li Z, Feng HH, Saricaoglu A (2017). The short-term and long-term effects of AWE feedback on ESL students’ development of grammatical accuracy. Calico Journal 34(3): 355–375. doi: 10.1558/cj.26382
Li Z, Link S, Ma H, et al. (2014). The role of automated writing evaluation holistic scores in the ESL classroom. System 44: 66–78. doi: 10.1016/j.system.2014.02.007
Liao HC (2016). Enhancing the grammatical accuracy of EFL writing by using an AWE-assisted process approach. System 62: 77–92. doi: 10.1016/j.system.2016.02.007
Lim K, Song J, Park J (2022). Neural automated writing evaluation for Korean L2 writing. Natural Language Engineering 29(5): 1341–1363. doi: 10.1017/S1351324922000298
Link S, Mehrzad M, Rahimi M (2022). Impact of automated writing evaluation on teacher feedback, student revision, and writing improvement. Computer Assisted Language Learning 35(4): 605–634. doi: 10.1080/09588221.2020.1743323
Liu S, Yu G (2022). L2 learners’ engagement with automated feedback: An eye-tracking study. Language Learning & Technology 26(2): 78–105. https://hdl.handle.net/10125/73480
Lu X (2019). An empirical study on the artificial intelligence writing evaluation system in China CET. Big Data 7(2): 121–129. doi: 10.1089/big.2018.0151
Mehrabi-Yazdi O (2018). Short communication on the missing dialogic aspect of an automated writing evaluation system in written feedback research. Journal of Second Language Writing 41: 92–97. doi: 10.1016/j.jslw.2018.05.004
Miranty D, Widiati U (2021). An automated writing evaluation (AWE) in higher education: Indonesian EFL students’ perceptions about Grammarly use across student cohorts. Pegem Journal of Education and Instruction 11(4): 126–137. doi: 10.47750/pegegog.11.04.12
Mohsen MA (2022). Computer-mediated corrective feedback to improve L2 writing skills: A meta-analysis. Journal of Educational Computing Research 60(5): 1253–1276. doi: 10.1177/07356331211064066
Nazari N, Shabbir MS, Setiawan R (2021). Application of Artificial Intelligence powered digital writing assistant in higher education: Randomized controlled trial. Heliyon 7(5): E07014. doi: 10.1016/j.heliyon.2021.e07014
Ngo TTN, Chen HHJ, Lai KKW (2022). The effectiveness of automated writing evaluation in EFL/ESL writing: A three-level meta-analysis. Interactive Learning Environments. doi: 10.1080/10494820.2022.2096642
Page MJ, McKenzie JE, Bossuyt PM, et al. (2021). The PRISMA 2020 statement: An updated guideline for reporting systematic reviews. BMJ 372: n71. doi: 10.1136/bmj.n71
Palermo C, Thomson MM (2018). Teacher implementation of Self-Regulated Strategy Development with an automated writing evaluation system: Effects on the argumentative writing performance of middle school students. Contemporary Educational Psychology 54: 255–270. doi: 10.1016/j.cedpsych.2018.07.002
Palermo C, Wilson J (2020). Implementing automated writing evaluation in different instructional contexts: A mixed-methods study. Journal of Writing Research 12(1): 63–108. doi: 10.17239/jowr-2020.12.01.04
Parra GL, Calero SX (2019). Automated writing evaluation tools in the improvement of the writing skill. International Journal of Instruction 12(2): 209–226. doi: 10.29333/iji.2019.12214a
Petchprasert A (2021). Utilizing an automated tool analysis to evaluate EFL students’ writing performances. Asian-Pacific Journal of Second and Foreign Language Education 6: 1. doi: 10.1186/s40862-020-00107-w
Potter A, Wilson J (2021). Statewide implementation of automated writing evaluation: Analyzing usage and associations with state test performance in grades 4–11. Educational Technology Research and Development 69(3): 1557–1578. doi: 10.1007/s11423-021-10004-9
Qian L, Yang Y, Zhao Y (2021). Syntactic complexity revisited: Sensitivity of China’s AES-generated scores to syntactic measures, effects of discourse-mode and topic. Reading and Writing 34: 681–704. doi: 10.1007/s11145-020-10087-5
Ranalli J (2018). Automated written corrective feedback: How well can students make use of it? Computer Assisted Language Learning 31(7): 653–674. doi: 10.1080/09588221.2018.1428994
Ranalli J (2021). L2 student engagement with automated feedback on writing: Potential for learning and issues of trust. Journal of Second Language Writing 52: 100816. doi: 10.1016/j.jslw.2021.100816
Reynolds BL, Kao CW, Huang Y (2021). Investigating the effects of perceived feedback source on second language writing performance: A quasi-experimental study. Asia-Pacific Education Researcher 30(6): 585–595. doi: 10.1007/s40299-021-00597-3
Roscoe RD, Wilson J, Johnson AC, Mayra CR (2017). Presentation, expectations, and experience: Sources of student perceptions of automated writing evaluation. Computers in Human Behavior 70: 207–221. doi: 10.1016/j.chb.2016.12.076
Sari E, Han T (2022). Using generalizability theory to investigate the variability and reliability of EFL composition scores by human raters and e-rater. Porta Linguarum: An International Journal of Foreign Language and Learning 38: 27–45. doi: 10.30827/portalin.vi38.18056
Saricaoglu A, Bilki Z (2021). Voluntary use of automated writing evaluation by content course students. ReCALL 33(3): 265–277. doi: 10.1017/S0958344021000021
Shamseer L, Moher D, Clarke M, et al. (2015). Preferred reporting items for systematic review and meta-analysis protocols (PRISMA-P) 2015: Elaboration and explanation. BMJ 349: g7647. doi: 10.1136/bmj.g7647
Shi H, Vahid A (2022). A systematic review of automated writing evaluation systems. Education and Information Technologies 28: 771–795. doi: 10.1007/s10639-022-11200-7
Stevenson M, Phakiti A (2014). The effects of computer-generated feedback on the quality of writing. Assessing Writing 19: 51–65. doi: 10.1016/j.asw.2013.11.007
van Eck NJ, Waltman L (2017). Citation-based clustering of publications using CitNetExplorer and VOSviewer. Scientometrics 111(2): 1053–1070. doi: 10.1007/s11192-017-2300-7
Wang Q (2022). The use of semantic similarity tools in automated content scoring of fact-based essays written by EFL learners. Education and Information Technologies 27(9): 13021–13049. doi: 10.1007/s10639-022-11179-1
Wang YJ, Shang HF, Briody P (2013). Exploring the impact of using automated writing evaluation in English as a foreign language university students’ writing. Computer Assisted Language Learning 26(3): 234–257. doi: 10.1080/09588221.2012.655300
Warschauer M, Grimes D (2008). Automated writing assessment in the classroom. Pedagogies: An International Journal 3(1): 22–36. doi: 10.1080/15544800701771580
Warschauer M, Ware P (2006). Automated writing evaluation: Defining the classroom research agenda. Language Teaching Research 10(2): 157–180. doi: 10.1191/1362168806lr190oa
Wilken JL (2018). Perceptions of L1 glossed feedback in automated writing evaluation: A case study. Calico Journal 35(1): 30–48. doi: 10.1558/cj.26383
Wilson J, Ahrendt C, Fudge EA, et al. (2021). Elementary teachers’ perceptions of automated feedback and automated scoring: Transforming the teaching and learning of writing using automated writing evaluation. Computers & Education 168: 104208. doi: 10.1016/j.compedu.2021.104208
Wilson J, Czik A (2016). Automated essay evaluation software in English Language Arts classrooms: Effects on teacher feedback, student motivation, and writing quality. Computers & Education 100: 94–109. doi: 10.1016/j.compedu.2016.05.004
Woodworth J, Barkaoui K (2020). Perspectives on using automated writing evaluation systems to provide written corrective feedback in the ESL classroom. TESL Canada Journal 37(2): 234–247. doi: 10.18806/tesl.v37i2.1340
Xu J, Zhang S (2022). Understanding AWE feedback and English writing of learners with different proficiency levels in an EFL classroom: A sociocultural perspective. The Asia-Pacific Education Researcher 31(4): 357–367. doi: 10.1007/s40299-021-00577-7
Yu Z (2020). Extending the learning technology acceptance model of WeChat by adding new psychological constructs. Journal of Educational Computing Research 58(6): 1121–1143. doi: 10.1177/0735633120923772
Yu Z (2022). A meta-analysis and bibliographic review of the effect of nine factors on online learning outcomes across the world. Education and Information Technologies 27(2): 2457–2482. doi: 10.1007/s10639-021-10720-y
Yu Z, Zhu Y, Yang Z, Chen W (2019). Student satisfaction, learning outcomes, and cognitive loads with a mobile learning platform. Computer Assisted Language Learning 32(4): 323–341. doi: 10.1080/09588221.2018.1517093
Zhai N, Ma X (2023). The effectiveness of automated writing evaluation on writing quality: A meta-analysis. Journal of Educational Computing Research 61(4): 875–900. doi: 10.1177/07356331221127300
Zhang J, Zhang LJ (2022). The effect of feedback on metacognitive strategy use in EFL writing. Computer Assisted Language Learning. doi: 10.1080/09588221.2022.2069822
Zhang Z (2017). Student engagement with computer-generated feedback: A case study. ELT Journal 71(3): 317–328. doi: 10.1093/elt/ccw089
Zhang Z (2020). Engaging with automated writing evaluation (AWE) feedback on L2 writing: Student perceptions and revisions. Assessing Writing 43: 100439. doi: 10.1016/j.asw.2019.100439
Zhang Z, Hyland K (2018). Student engagement with teacher and automated feedback on L2 writing. Assessing Writing 36: 90–102. doi: 10.1016/j.asw.2018.02.004
Copyright (c) 2023 author(s)
This work is licensed under a Creative Commons Attribution 4.0 International License.