Tokai University Foreign Language Education Center Journal. Vol. 14. (pp. 185 - 190). Oct. 1994.

Oral Proficiency Testing: One Approach for College Classes

by Tim Newfields


At the end of each academic year teachers are faced with the necessity of assessing what their students have learned and, in many cases, representing this by a grade. To convert many hours of instruction into a single letter or number is a daunting task. In assigning grades, instructors must contend with three issues: (1) the pedagogical merit of their evaluation tools, (2) the reliability of their measures, and (3) the practicality of implementing a given diagnostic procedure. This paper discusses these issues of pedagogy, reliability, and practicality, and suggests one method of oral proficiency testing which may be valuable for large classes.

Issues in Oral Proficiency Testing

1. Pedagogical Concerns

Ideally, tests should help not only teachers and administrators but also students in assessing their own performance: they should serve both evaluative and educational functions. Unfortunately, many tests are designed primarily to dispense grades. Typically, students take a test at the end of a semester and receive a grade, with no comments, a few weeks later. Such feedback is of marginal value. Prompt feedback is crucial, since most students tend to forget the details of their tests soon after completing them. Unless feedback is specific and immediate, its pedagogical value is limited.

Rather than waiting until a battery of questions has been completed before providing feedback, I prefer to provide suggested answers soon after each question has been raised. With the oral examination procedure described in this paper, prompt, point-by-point feedback can be offered without concern that students will manipulate test results.

2. Concerns about Reliability

In many EFL classes in Japan, four tests are held each academic year. With class time at a premium, many instructors are reluctant to spend time testing rather than teaching. However, as Laing (1991) suggests, the process of testing is much like dart throwing: only after tallying up a large number of scores can we begin to assess an individual's skill. The performance of any given student varies considerably over time. A student whose grade in a class is based on merely a few test results can justifiably feel his or her skill has not been adequately measured. The chance of an inflated or deflated score is high.

In the oral proficiency testing procedure I recommend, six exams are held over the course of an academic year. This figure is high enough to warrant a modest degree of accuracy. Obviously, the more frequently a measure is administered, the higher its degree of reliability will be.
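The dart-throwing analogy can be made concrete with a small simulation. The sketch below is purely illustrative and not from the original article: it assumes a hypothetical student whose "true" ability is a fixed score, with exam-day performance fluctuating randomly around it, and shows how averaging more exams brings the final grade closer to that true score.

```python
import random
import statistics

random.seed(42)

TRUE_ABILITY = 70   # hypothetical student's underlying score (an assumption)
SPREAD = 10         # assumed day-to-day fluctuation in performance

def simulated_scores(n_exams):
    """Draw n_exams scores that fluctuate around the true ability."""
    return [random.gauss(TRUE_ABILITY, SPREAD) for _ in range(n_exams)]

def grade_error(n_exams, trials=5000):
    """Average gap between the mean exam score and the true ability,
    estimated over many simulated academic years."""
    errors = []
    for _ in range(trials):
        mean_score = statistics.mean(simulated_scores(n_exams))
        errors.append(abs(mean_score - TRUE_ABILITY))
    return statistics.mean(errors)

for n in (1, 2, 6):
    print(f"{n} exam(s): average grading error = {grade_error(n):.1f} points")
```

Under these assumptions, a grade based on a single exam typically misses the student's true level by several points more than a grade averaged over six exams, which is the intuition behind holding more frequent tests.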

3. Practical Concerns

A leading concern many teachers have about any test is the relative ease of administration. With up to fifty-five students in a class, few teachers look forward to devoting the hours necessary to assess the performance of their students individually. This leads to an interesting irony: although the best way to assess oral proficiency may be orally, the time involved in administering individual oral exams has prompted many teachers to opt for more convenient testing formats. Perhaps the easiest test to administer is a written examination in a multiple choice or cloze format. However, the extent to which such examinations can accurately assess oral proficiency is questionable. Shy students who never speak in an actual conversation often manage to fill in the right blanks on a test sheet.

In lieu of conducting individual oral examinations, some teachers find it more practical to conduct interviews in pairs or small groups. Heltsley and Swanson (1993: 107-110) describe one procedure for conducting oral interviews with paired students. However, administering paired interviews to classes of over thirty students is unwieldy.

More than a few teachers have chosen to ignore the issue of final examinations entirely and instead base their final grades on quantifiable aspects of classroom performance such as attendance, completed homework, or the cumulative results of mini-quizzes. I regard this as a valid way of measuring classroom performance, but it is not a valid measure of English ability. A student who attends class faithfully, turns in the recommended assignments, and makes fledgling attempts to speak might receive a higher grade than one who is occasionally absent - even though the latter student may in fact have better oral skills.

The testing procedure I use takes less than thirty minutes to administer. Moreover, the grading of the exams is part of the administration process. Within moments of completing the exam, students know precisely how well they have done.
