NOTE: The article below is mirrored from the JALT Testing & Evaluation SIG website.

Shiken: JALT Testing & Evaluation SIG Newsletter. Vol. 7 No. 1. March 2003. (pp. 10-13) [ISSN 1881-5537]

An Interview with J.D. Brown

by Tim Newfields

[Photo of J.D. Brown, c. 2000]

James Dean ("J.D.") Brown is a professor in the ESL Department at the University of Hawai'i at Manoa. He has authored many books on language testing, curriculum design, program evaluation, and research methods; a summary of his publications is available at ~brownj/. J.D. has also served on the editorial boards of TESOL Quarterly, JALT Journal, and Language Testing, as well as on the TOEFL Research Committee and the Executive Board of TESOL, and has taught extensively in France, the People's Republic of China, Saudi Arabia, Japan, Brazil, Venezuela, and the USA. This interview was conducted electronically in September 2002.

How did you first get involved in testing and how have your testing interests changed since then?

It all started when I was about two years old and my dad put an E-flat clarinet in my wee little hands.
I guess he had visions of a child prodigy Benny Goodman or something. At any rate, I expressed my individuality by rebelling when I was in the second grade and taking up the trumpet. It took my dad a couple of years to get me interested in playing the French horn instead, and a decade or more before I understood the wisdom of that decision. Anyway, at about eight, I became a French horn player. The long and short of it is that I became a pretty good musician and got a full ride to the Oberlin Conservatory of Music despite my lackluster grades in high school. After flunking out of Oberlin for "academic reasons" (like not being able to pass English literature, psychology 101, US history, and so forth), spending three years in the army, being a hippy for about a year, and finding out that, somewhere along the line, I had lost all hearing above 6000 cycles, I changed my major from French horn to French. The story of how I gradually edged toward the ESL field is similar to the stories of many in our field, so I won't repeat it here.
By hook or crook, I ended up at UCLA doing an MA degree in TESL. One day in my first ESL/EFL teaching methods course, the prof, Russ Campbell, was talking about things to do career-wise in the field. He included things like teaching ESL, teaching EFL overseas, directing a program, materials development, and so forth. But I was looking for something like the French horn. Let me explain: if you play the French horn, the oboe, the bassoon, or the viola, you can be pretty sure there will always be a place for you in the orchestra largely because most sensible people want to play the trumpet, flute, violin, or clarinet. So what I was looking for in applied linguistics was that corner of the field where I would be pretty sure to flourish because it was a bit odd. I was looking for the French horn. Russ continued his discussion by suggesting that curriculum development was another area that might prove interesting, or advising foreign students, and so forth. I still didn't hear the French horn-like area that might work for me. As an afterthought, Russ said something like, "Oh yeah, and you can also do language testing, but most language teachers aren't interested in that because it involves lots of statistics and math." Given that I was always pretty good at math and that nobody else wanted to do this particular thing, I figured it might just turn out to be the French horn I had been looking for. The next semester, I took a testing class in our department and a stats class in the School of Education. The rest is history. I guess that's the long answer to your question. I quite consciously got into language testing because it seemed like the French horn to me.

You have conducted many training programs for people wanting to learn more about testing. What do you feel is the single most important concept for teachers in the field to remember about testing?

To me the most important concept for teachers and administrators alike is the distinction between norm-referenced and criterion-referenced testing that I have explained in a number of my publications. In terms of the important decisions that most language teaching professionals make about their students' lives, doing so with the right form of referencing is absolutely essential. Far too many people want to buy (or photocopy) a test that will serve them as a proficiency/placement/diagnostic/achievement test. That very dream indicates they have no idea what those four functions of testing are or how very different they are from each other in terms of purpose, ranges of ability, content, focus, score distributions, item analysis techniques, and so forth. Understanding the concepts of norm-referenced and criterion-referenced testing is essential to understanding the basic decision making purposes of tests in general, and language tests in particular, so I would identify that as the single most important testing concept for language teachers and administrators to know.

Are there any trends in the field of language testing which especially concern you?

In the field as a whole, I would say that I am most bothered by smug discussions of "the alternatives to language testing" — discussions that assume that such approaches are outside of language testing and somehow automatically superior to general language testing. As I wrote with Thom Hudson in a TESOL Quarterly article a few years ago, I think such testing types as portfolios, conferences, and self-assessments (all included in a book I edited for TESOL entitled New Ways of Classroom Assessment) are great, but we must view them not as alternatives to language testing, but rather as alternatives within language testing.


I am also bothered by notions of critical language testing and other politically correct/trendy ideas that surfaced last year in a special issue of Language Testing. But then, that's probably just because I'm a hopelessly fossilized positivist, who (like most academics worldwide) still believes in the quaint old-fashioned idea called the scientific method. At the same time, I must admit that that was the only issue of Language Testing that I have ever read cover to cover, so obviously I found something of interest in each and every article.

What is meant by the concept of "assessment literacy"? Would you say that the level of "assessment literacy" among most EFL teachers in Asia is changing?

"Assessment literacy" to me would mean having the ability to interpret and understand test results in a logical way. A person who is assessment literate should be able to read a headline like "Japan 149th on the TOEFL" (which I actually saw in The Japan Times some years back) and understand that the article which follows is nonsense because it claims that being 149th in the world means that English teaching in Japan is terrible. So often, assessment information is sensationalized by the press and even the educated public is unable to understand that sensationalization for what it is. To me that means there is a serious "assessment literacy" problem in Asia and everywhere else I've lived, including the United States. In my opinion, "assessment literacy" should be part of every educated person's repertoire.

You are well known for your comments on the Japanese university entrance exam system. Can you think of any university in Japan which has particularly well designed entrance exams? If so, what did they do differently from other schools?

I wouldn't want to go there. Let me just say the university English language entrance exams that I've seen in Japan, and I've seen hundreds, have generally been very amateurish. Wherever educational testers or language testers have been involved in designing tests in Japan, the tests have been much better than those amateurish efforts found at most Japanese universities. A specific example that I can mention is the Center Exam, which I feel has been making great efforts to improve itself, and has been succeeding, perhaps because professional testers have been involved.
I don't think that I have to repeat my view that the use of amateurish exams borders on criminal behavior, especially in Japan where the motivation for doing the testing seems to be to make money. But that said, it also appears that the entrance exams are becoming increasingly irrelevant because of the demographic changes taking place in Japan. So few students and so many universities. And now, so many different ways to get into Japanese universities: recommendation admissions, special "returnee" admissions procedures, admissions from associated feeder high schools, transferring from campuses in other countries, and so forth.
The central problem may no longer be the university entrance examinations; the problem may be in deciding what a degree from a Japanese university is worth (see for instance, Brian McVeigh's recent book, called Japanese higher education as myth). Consider: at one time, university graduates were valuable because they were among those select few who had passed a university entrance exam. The fact that many students coasted through university after passing the exam (with no threat of failing) did not change that fact, so Japanese university degrees had some meaning (i.e., that the students had at least jumped the one hurdle of an entrance exam). Now however, with the specter of all those students who want to go to university being accepted somewhere and all those students graduating, you have to ask yourself what value the degrees will have, if any. What I mean is: if all students are accepted and all are graduated, the degree simply means that the students were able to stay in one place for four years or so. Does that mean they are "educated", or simply that they are "obedient"? Anyway, that is a Japanese problem, and since I don't want to be perceived as a "cultural imperialist", I will simply stop with my last question about obedience.

What areas of language assessment would you like to see more studies devoted towards?

Internationally, I think it is inevitable that more and more studies will be done on performance testing in its many forms. If we want to teach students to actually communicate in a second language, we cannot continue to restrict ourselves to the relatively easy testing of their language knowledge (with grammar and vocabulary tests), or even their receptive language skills (with listening and reading tests). We will have to face the many challenges of testing their performance in written and oral communication.
In Japan, of course, the testing in many institutions is so steeped in yakudoku (translation-reading) that testing anything beyond language knowledge (grammar, vocabulary, and translation) would be a step forward. Even testing the receptive skill of listening in an effective manner would take considerable political clout and would be considered innovative. Funny, yeah?

What sort of projects are on your event horizon? Can you briefly describe any books you are currently working on?

Well, the last two years have been very productive for me. In 2001, Cambridge published Using surveys in language programs; the Foreign Language Teaching and Research Press & the People's Education Press in Beijing published a Chinese edition of my 1988 Cambridge book (Research methods in foreign language teaching: Guide for reading educational statistics); and UH Press published a collection of testing articles edited by Thom Hudson and me. This year, Cambridge published Criterion-referenced language testing, which I co-authored with Thom Hudson; Oxford published Doing second language research, which I co-authored with Ted Rodgers; and UH Press published Investigating second language performance assessments, which I co-authored with Thom Hudson, John Norris, and Bill Bonk. So you would think I would be able to take a year or two off. But unfortunately, these things seem to take on a life of their own. Because Prentice-Hall inexplicably took Testing in language programs out of print after only five years, I find myself rewriting that 1996 book to publish it elsewhere. I'm rewriting it to make it more overtly spreadsheet oriented. I am also working on three other book projects outside of the testing area; in fact, one of them is outside the area of applied linguistics. But I would rather not talk about them until they are closer to fruition. I don't want to jinx them. Anyway, I manage to keep busy.


McVeigh, B. (2002). Japanese higher education as myth. Armonk, NY: M. E. Sharpe, Inc.

