
Guest Speakers


Plenary panel discussion 1

Antony Kunnan

Moderator


- Principal Assessment Scientist at Duolingo, Inc.

- Senior Research Fellow at Carnegie Mellon University

Antony John Kunnan is Principal Assessment Scientist at Duolingo, Inc. and Senior Research Fellow at Carnegie Mellon University. He has held university professorships in Los Angeles, Hong Kong, Singapore, and Macau, and a Fulbright professorship at Tunghai University, Taiwan. His research interests are in the areas of fairness, validation, and ethics. He is the author and editor of over 90 publications; his most recently authored book is Evaluating Language Assessments (Routledge, 2018), and his four-volume edited collection is The Companion to Language Assessment (Wiley, 2014). He has given 120 invited talks and workshops in 35 countries. He was President of the International Language Testing Association and Founding President of the Asian Association for Language Assessment. He is also the founding editor of Language Assessment Quarterly.

Plenary panel discussion 2
The Essence of Learning-Oriented Language Assessment: Focusing on the Role of Assessment Tasks


Mikyung Kim Wolf

Managing Principal Research Scientist at Educational Testing Service

Mikyung Kim Wolf is Managing Principal Research Scientist at Educational Testing Service. Her research areas span technology-enhanced language assessments, formative assessment, and validity issues in assessing K-12 English language learners in the U.S. and in global contexts. Mikyung has published widely, including two edited books, English Language Proficiency Assessments for Young Learners and Assessing English Language Proficiency in U.S. K-12 Schools. She chairs the International Language Testing Association (ILTA)'s Language Assessments for Young Learners SIG and serves as an associate editor of Language Assessment Quarterly. Her recent co-authored article, Investigating the Benefits of Scaffolding in Assessments of Young English Learners: A Case for Scaffolded Retell Tasks, won the 2019 ILTA Best Article Award. Mikyung received a B.A. in English language and literature and an M.A. in psycholinguistics from Korea University, and a Ph.D. in applied linguistics with a specialization in language assessment from UCLA.

Abstract

  One of the key elements of effective learning-oriented assessment is carefully designed assessment tasks that promote teaching and learning (Carless, 2007). In addressing the future of assessment, Bennett (2018) also places an emphasis on creating assessments with tasks designed to guide instruction and learning. The language testing field has long researched the qualities of assessment tasks to ensure that tasks are an adequate instantiation of the intended construct and lead to valid inferences about test takers' language abilities (e.g., Bachman, 1990; Mislevy & Yin, 2009; Norris, 2018). With the recently expanded concept of learning-oriented language assessment (Hamp-Lyons, 2017; Jones & Saville, 2016; Purpura & Turner, 2014), the role of assessment tasks and their principled design needs to be revisited.
  In this presentation, I will discuss the role of assessment tasks in relation to the principles and other dimensions of learning-oriented language assessment (LOLA). This discussion will be accompanied by a theory of action as a useful framework for understanding the pivotal role of assessment tasks and their relationships with other LOLA aspects. I will also discuss the potential of assessment tasks for learning, both in large-scale standardized assessments and in classroom-based assessments. To illustrate this point, I will demonstrate several examples of ETS's innovative assessment development that are aligned with the core concept of LOLA. I will end the presentation with research directions for validating LOLA for its intended use of assessment for learning.

Plenary panel discussion 3
Aligning large-scale and classroom assessment 


Nick Saville

- Director of Research & Thought Leadership at Cambridge Assessment English (University of Cambridge)

- Elected Secretary-General of the Association of Language Testers in Europe (ALTE)

Dr Nick Saville is Director of Research & Thought Leadership at Cambridge Assessment English (University of Cambridge), and is the elected Secretary-General of the Association of Language Testers in Europe (ALTE). 

He regularly presents at international conferences and publishes on issues related to language assessment. His research interests include assessment and learning in the digital age; the use of ethical AI; language policy and multilingualism; the CEFR; and Learning Oriented Assessment (LOA). He co-authored a volume on LOA with Dr Neil Jones (SiLT 45, CUP) and recently wrote a chapter on LOA as a way of understanding and using all types of assessment to support language learning (Learning-Oriented Language Assessment, Routledge). 

Nick was a founding associate editor of the journal Language Assessment Quarterly and is currently joint editor of Studies in Language Testing (SiLT, CUP) and editor of the English Profile Studies series (EPS, CUP). He sits on several University of Cambridge boards, including: the Interdisciplinary Research Centre for Language Sciences; the Institute for Automated Language Teaching and Assessment; and English Language iTutoring (ELiT), which provides AI-informed automated systems. He is on the Board of Trustees for The International Research Foundation (TIRF) and was a member of the Advisory Council for the Institute for Ethical AI in Education, whose final report was published in March 2021.

Abstract

  The alignment of learning and assessment goals is required to ensure that what is taught is, indeed, what is tested, and that both serve purposes deemed to be of value to society. Alignment is not a simple notion, but is better understood as a 'complex, non-linear, interacting system' (Daugherty et al. 2008: 253) within an Ecosystem of Learning. I will explore different interpretations of goals and alignment, and their practical implications.

Plenary panel discussion 4
Making Assessment Work in the Classroom

David Booth

Director of Test Development for Pearson English Assessment.

David Booth is the Director of Test Development for Pearson English Assessment. He is responsible for the development of specific Pearson tests, ensuring that all test development processes are executed to the highest standards and that test material is of the highest quality and fit for purpose. David works closely with other staff at Pearson to develop assessment and learning solutions to meet specific customer requirements.
David's main expertise is in the development and revision of tests, and he has given presentations at major conferences on this theme. He has also contributed articles on specific test development projects in published research notes. His other interests include corpus linguistics and assessment for specific purposes. Before joining Pearson, David worked for 10 years at Cambridge ESOL, part of Cambridge Assessment. He also has extensive academic management, teaching and teacher training experience, having worked for the British Council in South Korea, Hong Kong and Malaysia.


Abstract

The classroom is a busy place, with teachers presenting and practicing language points and skills and encouraging learners to communicate effectively in groups, whilst paying attention to the social context of language use. In addition, teachers offer skills practice and helpful learning strategies, often in relation to the specific goals of the learner, whether for future study or for developing language skills appropriate for the workplace. Teachers are also involved in evaluating learners: sometimes just to give feedback to support learning, but also for end-of-term or end-of-year evaluations, which can have an impact on learners' progress, motivation and life chances. This testing activity is often set against the background of high-stakes international tests of English such as PTE Academic or IELTS.

Set in the broader context of modern language assessment practices, this paper will look at specific classroom tools which help teachers evaluate students' learning and give comprehensive feedback referencing specific learning objectives and course material, designed to have an immediate impact on the learner. The paper will evaluate the use of automated scoring methods based on AI (artificial intelligence) technologies for productive language skills, such as those used in Pearson Benchmark tests and PTE Academic, contrasting them with traditional models of speaking and writing assessment.

The paper will also look at how test items which target integrated skills tap into a wider range of language ability traits, thereby addressing the construct under-representation of current approaches to testing. This approach adds significant detail and precision, essential to high-stakes tests such as PTE Academic but also fundamental to tests such as Pearson Benchmark, where detailed feedback is needed. These tools and approaches help build a much more comprehensive picture of language learning for the learner, the teacher and, for younger learners, the parent, identifying where the learner could work most actively to improve their language proficiency. The assessment tools are also used in conjunction with certificated tests to reward learners throughout their lifetime of language learning.

Plenary panel discussion 5



Yan Jin

Professor of applied linguistics at the School of Foreign Languages, Shanghai Jiao Tong University.

Yan Jin is a professor of applied linguistics at the School of Foreign Languages, Shanghai Jiao Tong University. She is currently President of the Asian Association for Language Assessment. Over the past three decades, she has been working on the development and reform of the College English Test (CET), which has a current annual test population of over 20 million. She has been Chair of the National College English Testing Committee since 2004. Her research is mainly focused on the development and validation of large-scale language tests. She is founding co-editor-in-chief of the SpringerOpen journal Language Testing in Asia and is also on the editorial boards of Language Testing, Language Assessment Quarterly, and a number of other international and Chinese journals on foreign language teaching and testing.

Abstract

  Reporting scores (and other relevant performance data) to intended users is a critical part of the test development process (Hambleton & Zenisky, 2013; Zenisky & Hambleton, 2015). In the context of large-scale testing, a score report serves as the primary interface between the test developer and stakeholders (Roberts & Gotch, 2019). Standard 6.10 of the Standards for Educational and Psychological Testing (American Educational Research Association, American Psychological Association & National Council on Measurement in Education, 2014) states that “[w]hen test score information is released, those responsible for testing programs should provide interpretations appropriate to the audience” (p. 119). It is also noted that the interpretations should “describe in simple language what the test covers, what scores represent, the precision/reliability of the scores, and how scores are intended to be used” (ibid.).
  A criterion-referenced test compares learners' performances against a predetermined standard, making it easier for test developers to provide direct and explicit score interpretations. In a norm-referenced test, scores are derived by comparing learners' performances against the norm group. The score interpretation of a norm-referenced test therefore presents a challenge to the test developer. The College English Test (CET) is a norm-referenced testing system designed for tertiary-level learners of English in the Chinese mainland. The CET written tests assess listening, reading, writing and translating skills and report a total score and component scores, as well as percentiles for each score. The CET Spoken English Test (CET-SET) is a separate computer-based test, reporting a graded score with grade descriptions. To make CET scores more meaningful and transparent, the National College English Testing Committee recently conducted an alignment study to link the CET to the Common European Framework of Reference (Council of Europe, 2001, 2018) and China's Standards of English Language Ability (Ministry of Education of the People's Republic of China & National Language Commission of the People's Republic of China, 2018). In this presentation, I will review the practices of CET score reporting and point out the need for a more granular level of reporting than is currently provided. I will then present the findings of the linking study, focusing on cut-score decisions for each component of the tests. Finally, I will discuss the implications of the findings for improving CET score reporting and interpretation for different groups of stakeholders.

Plenary panel discussion 6

A trial approach to integrating teaching, learning and assessment in elementary school English education

Keita Nakamura

Researcher and Manager of the Research and Development Section at the Eiken Foundation of Japan.

Keita Nakamura is a researcher and the manager of the research and development section at the Eiken Foundation of Japan. He is also a doctoral student at the Waseda University School of Education, working on the application of the socio-cognitive framework to the validation of the TEAP. He was a member of the Japanese Ministry of Education's working group on linking examinations to the CEFR and reported the results of the EIKEN and TEAP linking projects.

 


Abstract

  Language teaching and learning are usually based on a curriculum, which determines “what to teach” and “how to teach”. The curriculum usually also determines the type of assessment to be used. Brown (2008) argued for the importance of incorporating assessment into curriculum development. The use of assessment and its impact on teaching and learning are increasingly regarded as important factors, because they lie at the core of test washback (Green, 2017).

In Japan, English education at elementary school started nationwide in 2020, with once-a-week lessons for Grade 3 and 4 students and twice-a-week lessons for Grade 5 and 6 students. The Ministry of Education released a course of study in 2017 to help schools develop their own curricula, but concerns remain among teachers about “what to teach/assess” and “how to teach/assess”.

  Previous literature has proposed desirable principles for materials to be used with young learners of English (Hasselgreen, 2005), including: a) tasks should appeal to the target age group; b) materials should involve stakeholders (e.g., teachers and learners); c) tasks and feedback should highlight learners' strengths; d) teachers should be given access to the basic criteria and methods for assessment; and e) assessment activities should be good learning activities.

  Based on the above principles, this study presents a trial approach to developing learning-oriented language assessment and teaching/learning materials to resolve such concerns, in three steps:
1) Teaching and learning activities appearing in textbooks were extracted, based on the course of study, the authorized textbooks, and the educational goals for each English skill (reading, listening, writing, and speaking);
2) A teaching/learning activity plan was developed based on the course of study and textbook information;
3) Assessment task designs (reading and listening) were determined based on the demands of the teaching/learning activities.

  The teaching/learning materials include a set of worksheets that students can work on in class and teaching guides to help teachers use the worksheets properly. The assessment tasks were designed to assess students' reading and listening skills. The developed materials and assessments were then trialed with teachers and Grade 6 students in order to obtain their feedback.

  The author will share the feedback collected from teachers and students on their perceptions of the developed materials and assessments, with a view to better facilitating the teaching and learning of English at elementary schools in Japan.

Plenary panel discussion 7

Providing Individualized Learning-Oriented Feedback on Standardized Tests: The case of Dr. GEPT

Rachel Yi-fen Wu


Head of the Testing Editorial Department at the Language Training and Testing Center, Taipei.

Rachel Yi-fen Wu holds a PhD in Language Testing and Assessment from the Centre for Research in English Language Learning and Assessment (CRELLA) at the University of Bedfordshire, UK. She is Head of the Testing Editorial Department at the Language Training and Testing Center, Taipei. She has been closely involved in the research and development, test production, and validation of the General English Proficiency Test (GEPT). Her research interests include reading assessment, language test development and validation, and methodological approaches to linking examinations to the CEFR. She is the author of Validating Second Language Reading Examinations: Establishing the Validity of the GEPT through Alignment with the Common European Framework of Reference (Cambridge University Press, 2014).

Abstract

  Standardized tests have long been criticized for their inability to provide individualized feedback to help learners perform better. Since 2021, the General English Proficiency Test (GEPT), a standardized test tailored to learners of English in Taiwan, has added learning features to its score reporting service via a service named Dr. GEPT. In addition to sub-test scores, Dr. GEPT provides each test-taker with personalized feedback, and offers learning resources and guidance on how to bridge the gap between current test performance and subsequent learning objectives. The new service is intended to reflect the concept of Learning Oriented Assessment (LOA), in line with the core competency of 'learner autonomy' outlined in Taiwan's new curriculum for 12-year basic education, implemented in 2019.

  In my presentation, I will cover the following:
- the key features of Dr. GEPT
- the perceived usefulness of the new practice, as reported in a recent survey
- details of a longitudinal study exploring the usefulness of the new score reporting service


