NOTE: The article below is mirrored from the JALT Testing & Evaluation SIG website.
SHIKEN: JALT Testing & Evaluation SIG Newsletter Vol. 6 No. 2. Apr. 2002. (p. 2 - 4) [ISSN 1881-5537]

An Interview with Liz Hamp-Lyons

by Tim Newfields

[ Photo of Liz Hamp-Lyons, c. 2001 ]

Liz Hamp-Lyons is the Chair Professor of English at The Hong Kong Polytechnic University and Director of the Asian Centre for Language Assessment Research. She received her Ph.D. from the University of Edinburgh in 1986. In 1991 her book Assessing Second Language Writing in Academic Settings was published, and in 2000 she coauthored a book with William Condon on portfolio assessment. She was a featured speaker at the May 11-12, 2002 language testing conference in Kyoto and became President of the International Language Testing Association in 2003. This interview was conducted electronically in March-April 2002.


You edit two journals: Assessing Writing and the Journal of English for Academic Purposes, which you work on with Ken Hyland. How do these publications differ?

Assessing Writing began in 1994 and, as its name implies, it has been a niche journal, but I hope to make changes which will bring it to the attention of a wider audience. I'm especially interested in writing assessment of users of English as a second language/dialect. Starting this summer, you should notice a more international focus in that publication.
The Journal of English for Academic Purposes is being launched this spring because of the enormous growth of EAP teaching and research, and – again as its name implies – it will be a journal of very wide appeal to those teaching English for academic uses in universities and colleges, and to those researching academic English.
Free sample copies of either publication are available from the Elsevier website at www.elsevier.com.

Can you mention how you became interested in testing?

That requires a long answer! I've been teaching since 1970, first in the UK. I got into EFL after encountering many non-native speakers in my classes, so I trained to teach them. I taught in various countries, and then began to think about a Ph.D. I won a fellowship at the University of Edinburgh, where I became the project coordinator for the ELTS Validation Study being carried out by Alan Davies and Clive Criper in 1982-1986. For a while I focused on the ELTS writing exams, and also designed the new scales for ELTS as a consultancy for the British Council. I then worked at the University of Michigan as Associate Director for Assessment in the English Composition Board, at that time one of the premier writing programmes in the US. I was also a consultant to Michigan ELI's Testing and Certification Division (MELAB being their main test). At the ECB we experimented with and implemented portfolio assessment, and so I began to write about non-traditional as well as traditional forms of language assessment.
When I moved to the University of Colorado I took my belief in portfolios with me. I became Director of Composition in the English Department and introduced portfolio assessment, developing a programme of assessment training for teaching assistants there. When I came to Hong Kong as head of department, I found a semi-abandoned research project with a considerable amount of money for developing a test for graduating tertiary students, intended to provide language proficiency information to employers. I revived the project and brought in top-level consultants to complete it. That project's resulting test became The Hong Kong Polytechnic University's own test for graduating students (called the "GSLPA"). That is now being considered for implementation across Hong Kong. GSLPA is applicable wherever employers want to know about the abilities of their potential graduate employees to use written and spoken English in a broad range of business contexts.
Why am I interested in all this? Because I see so much testing in the world, and see how badly most of it is done. I also see the faith many people put in tests and test results, and the consequences for people when test results are used to make decisions about their lives.

What are your current research interests?

I try to maintain fairly wide interests. I am finishing a couple of research projects. One looks at aspects of teachers' professional development, focusing on process writing. Another is on teacher reflection. I have doctoral students working on self-access, programme evaluation, self-directed learning, and critical thinking in the L2 senior school classroom. Others are working on self-assessment in distance education, on the assessment of specific-purpose reading, and on washback and impact studies. I also have two research projects developing online specific-purpose assessments, and one (with Alan Davies) looking at English proficiency from the perspective of language norms. I spend a lot of time thinking, reading and writing about writing assessment, and even more thinking about ethics in language assessment.

[ p. 2 ]

The language norms project sounds interesting and relevant to us in Japan. Could you elaborate briefly on it?

There are quite a few very big English language tests that are used all around the world to make decisions about people's futures: TOEFL, TOEIC and IELTS come immediately to mind. These tests are each designed with a particular "norm" of English in mind: American English or British English. But many learners of English are taught by non-native speaking English teachers, and live in countries where English is common but the "standard" AmE or BrE variety is not used. This is true of Singapore and Hong Kong. In our research Alan Davies and I are investigating the language of some major English tests, both international ones and so-called "local" ones like China's College English Test, which is taken by 2 million students every year. We would like to hear the views of ELT experts in Japan about whether international tests are 'fair' to Japanese users of English when they test only a "standard" variety rather than the English actually spoken and written in Japan; whether locally-developed and locally-used tests are 'fair' in the same way; and whether this is seen to be an issue at all. In our research we are currently looking at "norms" for testing in Hong Kong, Singapore, India and China, but we would be glad to expand the project later.

Could you also highlight some of your concerns about ethics in language testing?

The discussion of "norms" and world Englishes above is a good example of the kinds of concerns that all professional language testers must think about. We are all much more aware than we used to be of the ways that tests affect people's lives, and of the impossibility of designing and administering tests that are completely reliable and also reasonably valid. All tests are a best approximation of what we aim to do. An ethical approach to language testing requires us to make clear the limitations of our tests to everyone involved: not only test takers, but their parents, their teachers, school administrations, and political decision makers. These days, it seems that every educational problem has a test thrown at it as a solution. Not only does that not work: it is not fair on the people concerned; it is not 'ethical.' Readers might want to read more on the issues of ethics in language testing in some of the publications below:

Hamp-Lyons, L. (2000). Social, professional and individual responsibility in language testing. System, 28, 579-591.

Hamp-Lyons, L. (2000). Fairnesses in language testing. In A. J. Kunnan (Ed.), Fairness and Validation in Language Assessment, Studies in Language Testing 9 (pp. 99-104). Cambridge, UK: Cambridge University Press.

Hamp-Lyons, L. (1999). Implications of the "examination culture" for (English language) education in Hong Kong. In V. Crew, V. Berry & J. Hung (Eds.), Exploring diversity in the language curriculum (pp. 133-141). Hong Kong: Hong Kong Institute of Education.

Hamp-Lyons, L. (1997). Ethics and language testing. In C. Clapham (Ed.), The Encyclopedia of Language and Education: Vol. 7. Language Testing and Assessment (Chapter 32, pp. 323-333). Series Ed. D. Corson. Dordrecht, Netherlands: Kluwer Academic Publishers.

Hamp-Lyons, L. (1996). Applying ethical standards to portfolio assessment of writing in English as a second language. In M. Milanovic & N. Saville (Eds.), Performance Testing and Assessment: Selected Papers from the 15th Language Testing Research Colloquium (pp. 151-164). Cambridge: Cambridge University Press.

How did the Asian Centre for Language Assessment Research get started and what are its current research projects?

ACLAR got started when our first assessment research project was adopted by our university as the operational exit assessment in English for our own students. The test generated a great many findings within the University, and we decided to build a research centre around it, branching out into other testing research, recruiting doctoral students in assessment, applying for other grants, taking on consultancy work, and so on.

[ p. 3 ]

Some of our research is proprietary and therefore confidential, but we have Government grants for: online context-led English language assessment, focusing on accountancy; online diagnostic language needs assessment for company secretaries; a comparison of the GSLPA (our own test) with IELTS; and standards and norms of English proficiency tests. We have doctoral students in language testing and teach courses in that area. We host academic visitors in the field, and take the opportunities that arise to build our expertise and networks. We also run workshops and seminars in assessment-related areas for local teachers.

Tell us about the December 2002 LTRC conference, which you are helping to organize.

The International Language Testing Research Colloquium (LTRC) has been held every year since 1978. It is a gathering of language testing researchers, academics, and representatives of big agencies as well as test developers/researchers who work for smaller enterprises. Most papers at this conference are about English language assessment, but we are not limited to that. In recent years LTRC has become the official conference of the International Language Testing Association (ILTA), and ILTA's annual business meeting (ABM) is held there. The present President of ILTA is Fred Davidson of the University of Illinois at Urbana-Champaign, and I am the Vice President. This year's LTRC will be held in Hong Kong, hosted by my Centre, on Dec 12-15: that's immediately before AILA, which will be in Singapore. We hope to attract people from all round the world, and so far interest is very keen. Although we'll have three days of papers, there will be plenty of eating, drinking and sightseeing, so we hope overseas visitors will have a wonderful time both intellectually and socially.

You will be giving a presentation about a computer-based context-led language assessment system at an upcoming language testing conference in Kyoto. Could you summarize some of the main points you wish to cover?

We will be presenting an assessment research project. The research team includes myself, Tom Lumley, Janet Hamilton and Jane Lockwood. In this project we studied the accountancy profession, looked at how the language of that profession could be assessed, and considered how and whether the assessment could be delivered online. We have developed a website for test delivery, and will be showing how the website works. The test is secure, so we cannot let anyone know the web address, at least for now. Once the full system is up and running, potential test-takers will be able to access a non-secure portion of the website to learn how the test works. Those who attend the presentation will have an opportunity to see how the system works and the kinds of test items it includes.


HTML: http://www.tnewfields.info/Articles/ham_new.htm   /   PDF: http://www.tnewfields.info/Articles/PDF/intHamp-Lyons.pdf

[ p. 4 ]