Reflection on the Workshop on Measuring Impact
On November 6th, CELT and PRIA ran the first event in a series of Methodology Workshops. The event, Measuring Impact: What counts as evidence?, aimed to equip participants from across the faculty with tools to design reliable and ethical impact studies suited to their context and purpose, and to engage in informed discussions with stakeholders across Higher Education, drawing on their teaching and scholarship expertise to judge what counts as evidence.
You can access the slides and recording on the CELT Resource page. PRIA and CELT thank the speakers for their contributions and the participants for their questions and comments in the discussion that followed.
A participant, Xiaojia Lyu, wrote a reflection on the event. Thank you, Xiaojia.
Reflection on the Workshop on Measuring Impact: what counts as evidence in language education?
By Xiaojia Lyu
PhD Candidate, School of Education, University of Leeds
The workshop was extremely relevant to my research interests in ESP (English for Specific Purposes). With the growing emphasis on evidence-based teaching practices, this session highlighted the need for a nuanced understanding of impact measurement, especially in language education, and presented a variety of methods and perspectives that resonated with me.
One of the most valuable insights was the range of research methodologies showcased, including both qualitative and mixed methods. I found the example of using text-based online chat to improve grammatical accuracy among advanced learners of Spanish particularly innovative. This method leverages digital tools to engage students actively, reflecting a flexible and authentic approach to language learning. Seeing such varied methods demonstrated the possibilities for impact measurement beyond standardised testing, especially in areas like language proficiency and learner engagement.
The study on the impact of an EAP (English for Academic Purposes) module on STEM students' writing was especially relevant to my work. This research used a mixed-method approach, combining text analysis and talk-around-text interviews to examine how students applied EAP knowledge to discipline-specific tasks. The use of the “Academic Language Toolkit” as an analytical framework for assessing textual cohesion, logical meaning, and ideational meaning was inspiring. This approach allowed the researchers to track the nuanced ways students transferred language skills to their academic writing, which aligns closely with my interest in balancing language and content knowledge in ESP teaching.
The coding process in this EAP study was also impressive. Researchers used MAXQDA software and followed a three-stage coding process: thematic coding to capture what transferred, characterization of the EAP module’s features, and theoretical coding based on Legitimation Code Theory. This layered approach offered a structured way to understand students’ perceptions and experiences of language transfer, providing a model for my own research on ESP teaching.
Overall, this seminar enriched my understanding of impact measurement in language education, especially in balancing quantitative rigour with qualitative depth. The methods and insights presented will be highly applicable to my research, where I seek to explore the optimal integration of language and content knowledge in ESP. The session also offered a valuable networking opportunity, allowing me to connect with other educators and researchers who share similar interests. Engaging in discussions and exchanging ideas with peers and experienced scholars has broadened my perspective on impact studies and opened potential avenues for collaboration. This experience reinforced the importance of a reflective, context-aware approach to impact studies, which I am eager to apply in my own work.