The Educational Interpreter Performance Assessment: Current Structure and Practices
Brenda Schick and Kevin T. Williams
The Educational Interpreter Performance Assessment (EIPA) is a process that is designed to evaluate the interpreting skills of educational interpreters in a classroom setting (Schick and Williams 1992). The EIPA is not limited to any one signed language or sign system, which is essential given the diverse signed languages that are used in the public schools. The tool can be used to evaluate interpreters who use Manually Coded English, or MCE (English-like signing); ASL, typically viewed as the signed language of the adult Deaf1 community; or PSE, the type of English-influenced signing found among those in the adult Deaf community (Bornstein 1990; Lucas and Valli 1989).2 In addition, different versions of the EIPA are used for interpreters who work in an elementary school and those who work in a secondary setting. In either version of the test, videotaped stimulus materials are used to collect two samples of the interpreter’s work. One sample is of the interpreter’s voice-to-sign skills, either translating or transliterating spoken English in the classroom environment into sign communication. The second sample is of the interpreter’s sign-to-voice skills, either translating or transliterating what a deaf child signs into spoken English. A specially trained evaluation team, using an EIPA rating form, evaluates both samples. The process is described in more detail in the following sections, and a profile of skills at each level of the EIPA is shown in the appendix.
THE NEED FOR A TOOL THAT SPECIFICALLY EVALUATES EDUCATIONAL INTERPRETERS
For interpreters who work with adults, certification processes have been established that ensure interpreting competency. The Registry of Interpreters for the Deaf, Inc. (RID), a national organization, administers a national testing system that provides interpreters with a certificate of competency, or RID certification. Previously, the National Association of the Deaf (NAD) also had a certification system.3 These certifications are often a requirement for employment in certain settings such as universities. For interpreters working with adults, RID or NAD certification ensures a minimal level of competency.
However, RID certification does not assess how well interpreters work with children and within a K-12 setting. Many differences can be found between interpreting for an adult and interpreting for a child. For example, classroom interaction is notably different from either an adult giving a lecture or two adults talking with each other, which are scenarios used by RID and NAD in their certification tests. Classroom interaction involves a variety of register shifts, often within a single speaker’s turn. Also unique to educational settings are the narrative styles used by teachers to support and encourage linguistic and cognitive development. Expressing these register shifts and language variations is essential to providing message equivalency, and most likely, register affects aspects of cognitive development (see chapter 4). In sign language, these register shifts are often represented by changes in prosody, the rhythmic and intonational aspects of language, and other nonmanual behaviors. As described in chapter 4, educational interpreters have a great deal of difficulty representing register, particularly those interpreting at the elementary level where a great deal of shifting in register occurs. Current tests designed to assess adults do not provide the kinds of challenges in communicating register that we see in a typical classroom.
In addition, to varying degrees, teachers use an adult-to-child register, especially with young children who are still learning to process language. An interpreter’s product needs to represent this adult-to-child register. Sign language has its own linguistic devices for adult-to-child register (Kantor 1982; Masataka 1992; Reilly and Bellugi 1996). Research with hearing children shows that this adult-to-child register, communicated through prosody, is critically important to young language learners (Fernald 1989; Fernald and McRoberts 1996; Kemler-Nelson, Hirsh-Pasek, Jusczyk, and Wright-Cassidy 1989). The enhanced changes in prosody, as compared with adult-to-adult register, may help a child better identify sentence and clause boundaries, key words, types of discourse, and discourse boundaries. Adult-to-child register also may provide a great deal of information about the meaning of our communication, for example, the speaker’s intentions such as teasing, warning, or soothing. Children use the enhanced forms of prosodic contours found in adult-to-child register to help determine the meaning of our communication even when they may not know the meanings of the words or understand the grammar (Moore, Spence, and Katz 1997). We should expect to find a similar relationship between adult-to-child register in signed language and language development. The EIPA uses actual classroom interaction across a range of ages of children that represents a broad variation in adult-to-child register and that must be represented in the interpreting product.
Another issue about classroom interaction is that it often involves a great deal of language in which the form does not match the function, for example, a teacher saying, “I think you should look at that answer again,” which really means, “You are wrong.” This mismatching does occur in adult-to-adult communication but not very frequently in the exchange of factual information, which is the context used in the RID assessment. In addition to lecturing about facts, classroom teachers also are supporting and encouraging language development, cognitive development, and social development; are teaching appropriate behavior; are bonding with the students; and so forth. Consequently, in the classroom, a great deal of language occurs in which the teacher’s intention may be more important than the content of the message. To interpret this kind of communication, an interpreter must have good control over the prosodic aspect of the language to communicate intention and not just form. The EIPA voice-to-sign classroom testing materials contain a great deal of language for which the interpreter needs to process the teacher’s communicative intention, and not just the form.
Finally, children sign quite differently from adults. For example, children make more articulation errors and often sign less clearly than adults. They also fail to provide background information to help the listener understand a message, and their discourse may not be well structured. Note, too, that hearing children also produce speech and language that is not as clear or as well structured as that of adults. As with spoken language, individuals differ in their ability to understand children’s speech or children’s signed language. Adults who have experience working with a variety of children are capable of understanding children even when their speech and language contain numerous errors. Interpreters who work with children also must be able to understand a child’s signing despite age-appropriate errors. The ability to understand language that is not well structured is especially important given that many deaf children in public schools have at least some degree of language delay.
The EIPA also assesses how well an interpreter can use MCE. Of note is the fact that, although only a small percentage of educational interpreters request an evaluation in MCE (about 12 percent), it is still used in some public schools. Although many professionals and members of the deaf community do not believe that MCE signing is appropriate or successful with deaf children (Johnson, Liddell, and Erting 1989; Supalla 1992; Woodward and Allen 1988), under the federally legislated Individuals with Disabilities Education Act of 1990 (IDEA), schools and parents retain the right to decide what type of signing is used with a child. RID or NAD certification is designed only for those interpreters using ASL or some form of transliteration. Interpreters who use a form of MCE would not be able to pass the test. Thus, the RID or NAD test would not accurately assess what many educational interpreters are expected to do in the classroom. For this and other reasons, many educational interpreters and school systems do not see RID or NAD certification as appropriate or obtainable.
The EIPA was designed to evaluate and to weigh during assessment those aspects of interpreting that are necessary to support language and cognitive development. As discussed previously, classroom teachers use language and discourse in a manner that scaffolds language and cognitive development in the hearing students. It is very important that a tool used to evaluate educational interpreters assesses how well this special adult-to-child register and discourse are represented in the interpreting product. Assessing how well an interpreter can represent adult-to-adult register and discourse will not provide a true picture of how well the interpreter performs in a school classroom.
STRUCTURE OF THE EIPA
The EIPA consists of formal assessment materials and a process of evaluation. The following sections describe the materials and procedures in an EIPA assessment.
Structure of the Stimulus Materials
All of the samples of the interpreter’s voice-to-sign and sign-to-voice skills are obtained using stimulus videotapes, summarized in Table 1. Two options of the exam are available (Options A and B) for each grade level and language. An interpreter could take the EIPA once using either A or B and then take the other option another time. Options A and B each contain different child tapes and different classroom tapes. Thus, an interpreter can choose a grade level and a language, using either Option A or Option B assessment tapes. The sign-to-voice stimulus tapes show a child or a teenager using the target signed language or sign system (ASL, PSE, or MCE). The voice-to-sign videotape shows actual classroom lessons, along with questions and comments from other children in the classroom, and each voice-to-sign tape contains examples of multiple classrooms, as shown in Table 1. All tapes were produced using professional-quality video, were filmed by a professional videographer, and were edited in a studio.
TABLE 1 Stimulus Tapes Used to Collect Samples for the EIPA
| Level | Sign-To-Voice Stimulus Tapes Options A and B | Voice-To-Sign Stimulus Tapes Options A and B |
| Elementary | Child signer using ASL Child signer using PSE Child signer using MCE | Five elementary classrooms, from first to sixth grade |
| Secondary | Teen signer using ASL Teen signer using PSE Teen signer using MCE | Two secondary classrooms |
For these sign-to-voice stimulus tapes, the children were interviewed using a technique that maximizes complex responses and language. In essence, the interviewer used techniques typically used in a language proficiency interview, asking complex questions as well as asking children to expand and give their opinions, techniques that have been found to elicit language from children that is more complex than what occurs when simply chatting (Schick 1997).
As with all children, the children’s language may contain errors in grammar and pronunciation, disorganized communication and discourse cohesion, fingerspelling that ranges from precise to imprecise, and references to people and places that are not explicitly identified. The interviewer was unknown to the children, so theoretically, the children should have properly introduced referents. However, like many children, they did not always do so, which was especially true for the elementary-aged children. The language produced by these children reflects what educational interpreters encounter daily. Interpreters who are familiar with how children sign and who understand them despite typical language errors are able to understand an unfamiliar child on a videotape. The stimulus videotapes contain the interviewer’s questions in spoken English, and the interpreter is asked to interpret the children’s responses. Interpreters are given a warm-up period during which they have the opportunity to watch the child sign without having to interpret. Then they are signaled to begin interpreting, and their interpretation is videotaped for later assessment.
As with the sign-to-voice tapes, two sets of voice-to-sign tapes are used, elementary and secondary. The elementary stimulus tapes include five different, authentic classrooms, ranging from first to sixth grade. All classroom content is challenging, containing lessons in science, reading, geography, or other complex subjects. Reflecting typical classrooms, all lessons are interactive, containing teacher narration and teacher-student dialogue, both requiring interpretation. The classrooms have frequent exchanges wherein the student and the teacher co-construct meaning across several turns, so interpreters must represent not only the content but also who is speaking. Similarly, there are many instances in which numerous children are speaking at once and interpreters need to make decisions concerning which aspects of the communication are essential to the goals of the main lesson. The tapes include frequent interchanges that question, discipline, scold, praise, warn, and challenge as well as the traditional exchanges of information.
Before watching and interpreting the stimulus tape, interpreters are given a set of lesson plans for what they will interpret. These plans contain the goals and objectives of each lesson as well as key vocabulary. This exposure to the lesson plan is intended to reflect best practices where all interpreters should know basic information before interpreting. As with the sign-to-voice stimulus tapes, interpreters are provided a warm-up period during which they can simply watch the classroom and listen to the teacher. The classroom tapes were selected to provide opportunities for fingerspelling, use of numbers, spatial mapping, and complex grammar. Teachers in the videotapes often backtrack in their discourse, repair their own statements, self-reflect, and give clues about what may be tested in the future.
Structure of the Rating Form
The EIPA uses a specially designed rating form that contains four broad areas of evaluation: voice-to-sign interpreting skills, sign-to-voice interpreting, vocabulary, and overall abilities. Table 2 shows an outline of the skills that are assessed. A five-point numerical scale is used to rate the specific skills. The interpretation is rated for use of prosody across several domains such as prosody to stress words and phrases as well as prosody to communicate affect, emotions, sentence boundaries, and register. Specific items rate the use of space (a) for morphological purposes such as verb agreement and (b) for discourse purposes such as those in comparisons and other forms of spatial mapping. The interpreting product is also evaluated on the correctness of grammatical production, articulation of signs, fluency, and fingerspelling. The amount of vocabulary is rated as to whether the interpreter appears to have a broad and complex vocabulary of signs or whether the interpreted message is affected by the lack of vocabulary knowledge. For sign-to-voice interpreting, the interpreting product is rated on how well the interpreter expresses aspects of register, prosody, and linguistic stress. The interpreter is also rated as to how well he or she understands the grammar, morphology, and vocabulary as well as his or her ability to select appropriate English vocabulary to represent signed language concepts. Finally, to ensure that the interpretation has a sense of a whole message, more global factors are evaluated, for example, whether the interpreter demonstrates sufficient processing time to be able to understand what is being communicated or whether the interpreter consistently indicates who is speaking.
Within each area, the interpreter is rated in approximately ten distinct areas, using a Likert scale ranging from 0 (no skills demonstrated) to 5 (advanced). An average is calculated from the ratings of the individual items across the results of a team of three evaluators. A useful analogy can be made to grading in traditional educational settings where a student earning an “A” has generally mastered the content area being tested or sampled, a student earning an “F” has generally not mastered the content, and a student earning a “C” has demonstrated “islands” of abilities but still has holes or gaps in his or her learning. In like manner, this description of grading applies to EIPA ratings: An “A” would be like the EIPA level 5; the “F,” like an EIPA level 0/1; and the “C,” like an EIPA level 3. Thus, an interpreter receiving a state’s standard of 3.5 demonstrates a C+ in interpreting skills. An interpreter who receives a level 3.5 is still making numerous errors, omissions, and distortions in his or her interpretation. Typically, these errors occur throughout the interpretation; the interpreter does not simply represent the most important information, omitting only what is less important. Basically, a child who has an interpreter at this level is not receiving the same information as his or her hearing peers.
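As a purely arithmetic illustration of the averaging just described (the item ratings below are invented, and the actual EIPA form, item count, and any rounding conventions may differ), a team score could be computed like this:

```python
# Illustrative sketch of EIPA-style score averaging.
# The item ratings are invented; the real rating form and any weighting
# or rounding conventions used by the EIPA may differ.

def overall_score(ratings_by_rater):
    """Average each rater's item ratings, then average across the team."""
    per_rater = [sum(items) / len(items) for items in ratings_by_rater]
    return sum(per_rater) / len(per_rater)

# Three raters, each scoring the same ten items on the 0-5 scale.
team = [
    [4, 3, 4, 3, 4, 3, 4, 4, 3, 4],   # rater 1 -> 3.6
    [3, 3, 4, 4, 3, 3, 4, 3, 3, 4],   # rater 2 -> 3.4
    [4, 4, 3, 3, 4, 4, 3, 4, 4, 3],   # rater 3 -> 3.6
]

score = overall_score(team)
print(round(score, 2))   # 3.53
print(score >= 3.5)      # True: would just meet a hypothetical 3.5 standard
```

An overall score of 3.53 would just meet a state standard of 3.5 while, as the text notes, still permitting numerous errors and omissions in the interpretation.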
TABLE 2 Domains of Skills and Specific Skills Evaluated in the EIPA
| Category | Skill |
| I. Interpreter Product—Voice-to-Sign | |
| Prosodic Information | A. Stresses or emphasizes important words or phrases |
| B. Appropriately uses face and body to express affect or emotions | |
| C. Expresses register | |
| D. Marks sentence boundaries (not run-on) | |
| Nonmanual Information | E. Indicates sentence types or clausal boundaries |
| F. Produces and uses nonmanual adverbial or adjectival markers | |
| Use of Signing Space | G. Uses verb directionality and pronominal system |
| H. Indicates comparison and contrast as well as sequence and cause-effect | |
| I. Uses ASL classifier system to show location or relationship | |
| Interpreter Performance | J. Follows grammar of ASL or PSE (if appropriate) |
| K. Uses English morphological markers (if appropriate) | |
| L. Clearly mouths speaker’s English (if appropriate) | |
| II. Interpreter Product—Sign-to-Voice (e.g., fluency, pacing, clarity of speech, volume of speech) | |
| Can Read and Express Signer’s— | A. Signs |
| B. Fingerspelling and numbers | |
| C. Register | |
| D. Nonmanual behaviors and ASL morphology | |
| Vocal-Intonational Features | E. Demonstrates appropriate speech production (rate, rhythm, fluency, volume) |
| F. Indicates sentence or clausal boundaries (not “run-on” speech) | |
| G. Indicates sentence types | |
| H. Emphasizes important words, phrases, affect-emotions | |
| I. Selects correct English words | |
| Interpreter Performance | J. Adds no extraneous words or sounds to message |
| III. Vocabulary | |
| Signs | A. Demonstrates appropriate amount of sign vocabulary |
| B. Forms signs correctly | |
| C. Demonstrates fluency (rhythm and rate) | |
| D. Uses vocabulary consistent with the sign language or system | |
| E. Represents key vocabulary | |
| Fingerspelling | F. Correctly produces fingerspelling |
| G. Produces correct spelling | |
| H. Uses fingerspelling appropriately | |
| I. Correctly produces numbers | |
| IV. Overall Factors | |
| Message Processing | A. Demonstrates appropriate eye contact or movement |
| B. Produces a developed sense of the whole message V-S | |
| C. Produces a developed sense of the whole message S-V | |
| D. Demonstrates appropriate process lag time V-S | |
| E. Demonstrates appropriate process lag time S-V | |
| Message Clarity | F. Follows principles of discourse mapping |
| Environment | G. Indicates who is speaking |
Structure and Expertise of the Rating Team
Three raters work simultaneously as a team to evaluate the videotape; one member of the team must be deaf. All hearing raters are RID certified and most possess graduate-level educational degrees. Deaf raters all have postsecondary education, and many are native signers. In addition, they must be proficient in the signed language or sign system being rated. All raters undergo more than forty hours of direct instruction related to the EIPA assessment tool, stimulus materials, curricular design, and English discourse styles used in educational settings. All evaluators undergo training in the role of pragmatics and prosody related to the interpreting process. A specially designed training and rating manual has been authored by the EIPA developers. Videotape materials used during training are professionally captioned. At the completion of this training, all raters work as observers for an additional mentoring period until their judgments are accurate and their observations can be articulated in an appropriate manner. The manual used during training is the same manual raters use each time they rate a candidate’s performance.
Feedback to the Interpreter
Each interpreter receives extensive feedback from the evaluation. He or she receives a copy of the rating form, with the averaged score for each rated item, and an average overall score. In addition, he or she receives written feedback concerning strengths and areas of need. Finally, the interpreter receives suggestions about which overall areas are in need of development, in particular, those areas that would help the interpreter most improve his or her abilities. For example, many interpreters do not effectively use spatial mapping. Improving this one area would improve several domains of interpreting.
This detailed feedback helps interpreters and interpreter educators know exactly where strengths and weaknesses are so they can better plan professional development. The EIPA report can serve as the basis for an interpreter’s professional development plan, focusing on areas of skill development for either working with a mentor or planning in-service training. One common problem that educational interpreters face when they receive their EIPA reports is that they may not understand some of the technical language used to describe strengths and areas in need of skill development. A glossary is provided, but some concepts require more than a definition of a term. For example, recipients often do not understand the terminology used to describe how ASL uses space for discourse purposes, or they do not know what prosody is or how it is communicated in sign language. Working with a skilled mentor would help an interpreter translate the EIPA evaluation into a professional development plan to build skills that would improve the interpreting product and process across many domains.
Schools could also use the EIPA to plan in-service training for a group of interpreters. A skilled professional can use the results of the EIPA for a group of interpreters to determine whether the group shares domains of skill that could benefit from instruction. Or a skilled mentor could help establish mentoring relationships among the interpreters in the schools by pairing an interpreter who has scored high in a particular domain with an interpreter who needs to develop skills in that particular area.
Assessing Content Knowledge Related to Interpreting
The EIPA is a performance test in that it evaluates the interpreting performance of an individual. However, the content knowledge essential to working with children or to working in the K-12 setting is broad. Educational interpreters, in addition to demonstrating excellent performance skills, must know basic information about their role and responsibility not only as an interpreter but also as a member of a child’s educational team and as a professional working in a public school. They should also know information about language development, reading, child development, the IEP process, hearing loss and hearing aids, Deaf culture, signed language, professional ethics, linguistics, and interpreting. Many interpreters also must know information about tutoring because they are often required to fulfill this role. Although educational interpreters cannot know this content to the same degree as the classroom educator, the deaf educator, or the speech pathologist, they still must understand how to work with these professionals to carry out a child’s educational program. They must also be able to communicate with the educational team about their perceptions of how the child is doing.
To assess an interpreter’s understanding of basic content knowledge related to working with children, a written test is currently being developed (the EIPA: Written Test, or EIPA:WT) that will assess interpreters’ knowledge of a variety of domains. The process of test development will ensure that the EIPA:WT will have good psychometric validity. One intent of content validity is to ensure that the range of information being tested reflects what experts in the field agree is essential for an educational interpreter to know. To this end, a large set of facts was written, representing a basic, standards-based curriculum. These facts were rated by a large number of content experts. Questions were written based on these facts. The advantage of this test development model is that the content standards can be widely disseminated so interpreters will know generally what is on the test, but the exact questions will remain confidential. When this test is completed, state agencies and schools will have the option of requiring this test as part of the EIPA evaluation process.
Results from Research Using the EIPA
Research has been conducted on a large group of educational interpreters in the state of Colorado (Schick, Williams, and Bolster 2000), using an older form of the EIPA in which interpreters were videotaped in their own classrooms with students for whom they interpreted regularly. The school districts that participated volunteered for the project. The data show that the majority of interpreters would not meet minimal Colorado performance standards, which require an overall EIPA score of 3.5 or greater. Nearly two-thirds of the group scored below a level 3.5 overall, which means that these interpreters are still making a significant number of errors, particularly with more complex language and discourse (see appendix). An interpreter at this level needs continued supervision and should be required to participate in continuing education in interpreting. Teachers and parents of a child whose interpreter scores in this range should be aware that a child’s misunderstandings of concepts presented in a lesson may be the result of a poor interpretation rather than problems with the child. However, an interpreter who scores at least at a 3.5 does demonstrate broad areas of competency that should be able to serve as a foundation for further learning.
A psychometric evaluation showed very good test-retest reliability on the use of this older form of the EIPA (Schick, Williams, and Bolster 2000). Specifically, no significant difference was found between scores on an initial EIPA assessment and a subsequent assessment, even though the interpreting samples were different (t(17) = −2.051, p = .056). This finding indicates that two different assessments on the same interpreter resulted in essentially the same score, even when a different sample was collected. Inter-rater reliability was also very good, with a point-by-point agreement of .78; in other words, when an interpreting sample was evaluated by a different rating team that was blind to the original rating, the interpreter’s score was essentially the same.
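As a sketch of what a point-by-point agreement figure like .78 measures, assuming it is the proportion of items on which two independent teams assign the same rating (one common definition; the authors’ exact procedure may differ, and the item-level scores below are invented):

```python
# Sketch of point-by-point inter-rater agreement (invented data; the
# EIPA authors' exact computation may differ from this common definition).

def point_by_point_agreement(scores_a, scores_b):
    """Proportion of items on which two rating teams give identical scores."""
    matches = sum(1 for a, b in zip(scores_a, scores_b) if a == b)
    return matches / len(scores_a)

# Item-level scores from an original team and a blind second team.
original_team = [4, 3, 4, 3, 4, 3, 4, 4, 3, 4]
blind_team    = [4, 3, 4, 4, 4, 3, 3, 4, 3, 4]

print(point_by_point_agreement(original_team, blind_team))  # 0.8
```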
Although the original EIPA, which used interpreting samples from the actual classroom, showed good reliability, states wanted an assessment that was more standardized in terms of the classroom teaching. Also, because each interpreting sample was different, the deaf evaluator was at a significant disadvantage because captioning all of the incoming assessment tapes was not economically feasible. Because of these factors, the stimulus tape version of the EIPA was created.
A pilot study was conducted to compare the current version of the EIPA, which uses stimulus tapes to collect the interpreting samples, with the older version, which used the interpreter’s actual classroom lessons. Data were collected for ten interpreters on both versions. The data show that the “live” version correlated highly with the stimulus tape version, with a correlation coefficient of .94. This finding means that interpreters who did well on the “live” evaluation also did well on the stimulus tape versions and that the stimulus tape assessment predicted to a very high degree how well the interpreter would perform in the actual classroom. However, in general, the interpreters scored one level lower on the stimulus tape version. This finding is not surprising because the “live” assessment involved interpreters providing interpretation for the classrooms and children with whom they worked on a daily basis. It is commonly accepted that interpreters work best with extensive knowledge of the situation, knowledge of the people involved, and general knowledge of the content. The finding also shows that the stimulus videotapes represent challenging material that requires the interpreter to frequently and consistently demonstrate skills measured by the EIPA.
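The reported .94 correlation can be illustrated with a standard Pearson correlation. The scores below are invented for ten hypothetical interpreters and are not the study’s data; they are constructed so that the stimulus-tape scores run roughly one level lower while preserving the ranking, which is why the correlation stays high even though the means differ:

```python
# Sketch of the live vs. stimulus-tape comparison using a Pearson
# correlation (invented scores for ten hypothetical interpreters).
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation between two score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

live = [4.2, 3.8, 3.1, 2.9, 3.6, 4.0, 2.5, 3.3, 3.9, 2.7]
# Tape scores run roughly one level lower but track the same ranking.
tape = [3.1, 2.9, 2.2, 1.8, 2.5, 3.1, 1.6, 2.3, 2.8, 1.9]

r = pearson_r(live, tape)
offset = sum(l - t for l, t in zip(live, tape)) / len(live)
print(round(r, 2))       # 0.98: high correlation despite the offset
print(round(offset, 2))  # 0.98: about one level lower on the tape version
```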
Currently, with funding from the Office of Special Education Programs4 (OSEP), the authors of this chapter are conducting additional research on the new stimulus tape version of the EIPA. This research will include the collection of new psychometric data and the completion of EIPA evaluations with interpreters who have received RID certification.
HOW THE EIPA FITS INTO A MODEL OF ASSESSMENT
The EIPA uses a model of interpreting in which text must be evaluated at a discourse level. It evaluates more than grammatical structures and clarity of signing and more than breadth of vocabulary. To obtain a rating that meets many states’ minimum standards, interpreters must demonstrate that they have a sense of the entire message, including the function, not just the form. This comprehensive competence includes broad control over the production of prosody, the use of space for discourse purposes, and facial expression. This competence is required for all versions of the assessment (MCE, PSE, and ASL) because all invented sign systems claim to borrow these aspects of ASL.
What the EIPA Does Not Evaluate
Although intended as a comprehensive evaluation of the product of interpretation, the EIPA does not assess all of the areas of expertise that are essential to being a well-qualified educational interpreter. For example, the EIPA does not assess an interpreter’s performance as a member of a professional team. As with all professionals who work in the public schools, this performance is best evaluated by the staff members at the school. Although most schools do not have an individual who is capable of evaluating interpreting skills, the evaluation of professional skills is different. Schools should be responsible for evaluating how well the interpreter performs as a professional, following the guidelines for other professionals in the school. For example, ethical guidelines for interpreters who work in public schools should follow, to a large extent, the same ethical guidelines for teachers. This expectation is especially true in terms of communicating with members of the child’s educational team and members of the child’s family. It also applies to decisions made with respect to interpreting, which should be made in the context of the educational team. The interpreter’s knowledge of professional roles and responsibilities will be assessed in the EIPA:WT.
In addition, many educational interpreters fulfill duties other than interpreting, for example, tutoring and aiding. The EIPA does not evaluate an interpreter’s ability to fulfill these roles, although the written test that is in development will have some additional roles represented in its content domains. Again, this kind of performance is best evaluated by the local school, which often has many other individuals also fulfilling these roles.
HOW THE EIPA IS BEING USED AROUND THE COUNTRY
The EIPA is being used in various ways throughout the United States and Canada, ranging from being a legal requirement to a tool used in mentoring. What is clear is that the EIPA is a national assessment system, in that it is used extensively in numerous states, universities, and school districts.
Certification to Meet Minimal Standards
Many states require a certificate of competency for educational interpreters. Some states, such as Colorado, Wisconsin, and Louisiana, have identified the EIPA as the only form of assessment recognized for state certification or licensure; others, such as Kansas, include the EIPA as one acceptable form of evaluation. Table 3 summarizes the state and Canadian provincial requirements or recommendations that involve the EIPA. The Regional Assessment System, which manages EIPA evaluations for states, is discussed in the next subsection.
TABLE 3 States and Canadian Provinces Using the EIPA and How It Is Being Used
Statewide Evaluation
In some states, the EIPA is being used for statewide evaluation, without minimum standards being required, often as a precursor to establishing a certification system. A good example of this approach is the state of New York (Mitchell 2002), which has been conducting an assessment program for three years, using the EIPA but with its own video stimulus materials. After the samples were rated, prescriptive plans were designed for each interpreter. With this information, the interpreter knew what training he or she would need and, thus, was able to take appropriate workshops, ASL classes, or both. In addition, the interpreter could receive mentoring, if needed and desired. The New York State Education Department is now preparing to propose certification requirements for educational interpreters in K–12 settings. Colorado also moved to establish standards in this manner; the state began offering EIPA evaluations in 1992 and, later, began requiring minimum EIPA scores to work as an educational interpreter.
Recently, a large regional effort to use the EIPA for state assessment, training, certification, and reciprocity has been initiated. The effort began when state directors of special education in the Mountain Plains Special Education region decided to support the use of the EIPA for their member states (Table 3 indicates which states are members). The consortium pooled funding to pilot a Regional Assessment System, with a director, so that any interpreter in the member states can access the EIPA. Currently, the Regional Assessment System includes eleven states and the Bureau of Indian Affairs.5 One of its goals is to avoid the duplication of oversight, materials, and information dissemination that would occur if each state established its own system. Each state will decide on its own how to use the EIPA, as evaluation or as certification, and what proficiency level it will encourage or require. This model is appealing because it allows coordination across a number of states as well as the pooling of financial resources and work. This approach may significantly affect how other states integrate the EIPA into their assessment and credentialing systems.
Measurement of Pre-Training and Post-Training for Interpreter Training Programs
Several interpreter training programs are using the EIPA to quantify the results of their training programs. For example, the Educational Interpreter Certificate Program (EICP), directed by Dr. Leilani Johnson in Denver, Colorado, uses the EIPA to determine whether candidates have sufficient skills to enter the program. The program also uses the EIPA as a post-training exit assessment to quantify the achievements of the students after the two-year program. Preliminary results, using a modified EIPA, showed that, on average, interpreters increased their EIPA scores about one level after completing the training program (Schick, Johnson, and Williams 2004). The EICP’s data underscore how much work is needed for an interpreter to advance a full EIPA level in interpreting skills. For example, an interpreter who is scoring at an EIPA level 2 has only basic sign vocabulary and makes numerous grammatical errors that significantly interfere with communication. Although an interpreter scoring at a level 3 is still making many errors and continues to require supervision, the interpreted message is generally intelligible with many major concepts present. Those in the field of interpreter training need data such as these to help document and analyze the effectiveness of different training models. In addition, the interpreter training programs at the University of Arizona and at Front Range Community College in Colorado are also using the EIPA as exit assessments.
Another creative pre-post training effort is under way in the state of Iowa. The Iowa Department of Education has provided funding to conduct a two-year study involving forty interpreters. All subjects in the study were evaluated using the EIPA (during June-July 2002). Using a professional interpreter training agency (SLICES, Minneapolis, Minn.), twenty interpreters will receive training over the next year that is specific to the weaknesses identified on each candidate’s EIPA. The remaining twenty interpreters will receive no additional training. All forty interpreters will be reevaluated at the end of the second year of the study. The intent of the study is to determine whether improvements in skills occur in those interpreters receiving specific feedback and training compared with those who do not receive this support.
A Way to Ensure Interpreting Skills during the Hiring Process
Because many school districts lack the ability to screen applicants for educational interpreting positions, the EIPA Diagnostic Center, located at Boys Town National Research Hospital, has created the Pre-Hire Screening version of the EIPA. This screening is just that; it is not a full EIPA assessment. Many school districts must have information about an interpreter’s skill level more quickly than the typical EIPA assessment procedure allows, which takes about two months. Most schools can hire without any assessment information, and even in states that require certification, schools can receive permission to hire an individual who does not meet standards by using an emergency credential, as with all other certified professional categories. However, many school districts would prefer to have some information about the interpreter’s skills before hiring. The Pre-Hire Screening version of the EIPA can provide schools some feedback about the interpreter’s skills within seventy-two hours, with the understanding that it neither constitutes a thorough evaluation nor is an alternative to a full EIPA for those states requiring a minimum score.
Districts using the Pre-Hire Screening are advised of the overall competency of an applicant in a manner that is more general than diagnostic. The Pre-Hire Screening rates three broad categories of skills rather than determining specific numeric scores. Interpreters may receive a rating that indicates skills at or above the minimum standard, meaning that the school can hire with assurance that the interpreter meets minimum standards. The interpreter may be in a hire-with-caution zone, indicating (a) that, although the interpreter has some good skills, a full EIPA is needed to determine whether minimum standards are met and (b) that the interpreter requires a skilled mentor or supervisor. Finally, the interpreter may receive a rating indicating that hiring is not recommended because the interpreter could not meet minimum standards on a full EIPA assessment.
Like the full EIPA, the EIPA Pre-Hire Screening is designed for candidates applying for elementary or secondary positions. It features child and teen signing models using ASL-PSE and MCE (SEE II). Schools can contact the EIPA Diagnostic Center to request testing materials. Materials are sent overnight, and after receipt of the candidate’s screening tape, the EIPA Diagnostic Center will provide results within twenty-four hours. The Pre-Hire Screening review is completed by one trained EIPA Diagnostic Center staff member, who provides the potential employer with a cursory overview of the candidate’s performance. This service is meant only to give employers additional information with which to make a more sound hiring decision.
An Indicator of Minimal Skill Level
To date, the vast majority of states using the EIPA for credentialing purposes have adopted a level of 3.5 or above as their minimal standard. Why? To earn a level 3.5, the interpreter must score significantly above a level 3 across the majority of the thirty-seven measurements on the EIPA. Interpreters who achieve an overall level of 3.5 have broad competencies in grammar, vocabulary, and textual processing. Many states believe that an individual at this level will continue to develop skills.
Many states have adopted the approach that basically asks, What level can we expect from a graduate of an interpreter education program? Of course, it should be noted that the field of interpreter training cannot really answer this question because the field does not have a standards-based approach to curriculum similar to that found in fields such as speech pathology. In addition, graduates of interpreter training programs do not necessarily graduate with thorough and broad interpreting skills. But again, we really do not know the skill levels of graduates because the field has developed neither standard exit criteria nor the expectation that programs will have a standardized exit evaluation. Nevertheless, we do not expect mastery from any new graduate of a professional training program. No one expects a recent graduate of a teacher-training program to be a master teacher, and schools expect these recent graduates to continue learning. States have tried to apply a similar concept with respect to minimum standards for educational interpreters.
A minimum standard responds to the question, What foundation of skills adequately enables the professional to reasonably function and ensures that the individual has competencies that are sufficiently broad to allow for further development? This concept may be difficult for many of us who believe that the standard should be set as high as possible because we understand the ramifications for a child’s development when an interpreter cannot express classroom content. Many states have articulated the hope that these established EIPA standards are only the initial step.
Because the profession of educational interpreting is relatively new, the low supply of educational interpreters is nearly at a crisis level. This situation is exacerbated by the lack of interpreter education programs specifically geared toward working with children in educational settings. A minimum standard of 3.5 on the EIPA seems to many states to be a realistic compromise between requiring no skills and requiring what those of us who understand child development and education would actually like to see. A reality in most states is that school districts can get emergency certification for a less-than-skilled interpreter. It is possible within many state systems to perpetuate a revolving door of uncertified interpreters, especially when schools quite legitimately cannot find an interpreter who meets minimum standards. In addition, school districts have been found to rename an interpreter’s position to avoid meeting minimum standards. Realistic standards make it more likely that schools will meet them consistently.
However, with the passage of the No Child Left Behind Act of 2001 (NCLB), signed into law in 2002, states may have to address the real issue in educational interpreting, that is, whether the student is making adequate progress. NCLB requires that states show that all children are achieving, with no exemptions for children with special needs. Schools are being held accountable for student achievement. The focus may soon change from determining what minimum standards we can expect for an educational interpreter to determining whether the child is able to learn and achieve with an educational interpreter who meets a state’s minimum standards. However, what is currently frustrating for the field is that, although NCLB specifies criteria to determine whether teachers and paraeducators are qualified (highly qualified in the case of teachers), the legislation does not describe what determines whether an educational interpreter is qualified.
In addition to specifying minimum standards for interpreting performance, some states have established means by which skill training and general education are designed and provided for educational interpreters. Colorado, for example, requires that a minimum of sixty contact hours of continuing education, combining skills and knowledge education, be completed every five years. Nebraska requires interpreters to accrue seventy-five clock hours of continuing education units (CEUs) within five years to maintain state interpreting credentials.
SUMMARY
The EIPA and soon-to-be completed EIPA:WT provide an excellent resource to states, schools, and parents to help determine whether an educational interpreter is qualified. The performance-based evaluation is ecologically valid in that it uses authentic classroom teaching and child signers to elicit an interpreting sample. Interpreters can be assessed at either the elementary or the secondary level, using ASL, PSE, or MCE. A previous version of the EIPA had good psychometric validity, and the current instrument is undergoing a psychometric evaluation. States and school districts can use the EIPA to ensure that a child has access to the majority of classroom content. Although providing a qualified educational interpreter does not mean that the child receives an education equivalent to what his or her hearing peers receive, it does mean that the child has access to much of the classroom interaction.
For parents, even in states that do not require minimum standards, the EIPA provides independent assurance that their child’s interpreter is capable of representing classroom content. In many cases, specifying an EIPA evaluation of the interpreter on the child’s IEP may be one means of determining qualifications when the state or school district has no minimum qualifications. For interpreter training programs, the EIPA can be an important aspect of program evaluation. For other types of training programs, it can provide information about the effectiveness of the training.
More states are requiring minimum standards, and the hope is that, in the next decade, all will. The No Child Left Behind Act may be the impetus for states to make sure that basic communication access is provided. However, more than a decade ago, in its report to Congress, the Commission on Education of the Deaf (1988) stated that the IDEA requires that “deaf students be integrated into regular classroom settings to the maximum extent possible, but if quality interpreting services are not provided, that goal becomes a mockery” (Commission on Education of the Deaf 1988, 103). We would add that without an independent, psychometrically valid assessment that is designed to assess what educational interpreters do, it is impossible for a school to say that a deaf child has access.
Profile of Skills at Each Rating Level of the EIPA
LEVEL 1: BEGINNER
Demonstrates very limited sign vocabulary with frequent errors in production. At times, production may be incomprehensible. Grammatical structure tends to be nonexistent. Individual is able to communicate only very simple ideas and demonstrates great difficulty comprehending signed communication. Sign production lacks prosody and use of space for the vast majority of the interpreted message. An individual at this level is not recommended for classroom interpreting.
LEVEL 2: ADVANCED BEGINNER
Demonstrates only basic sign vocabulary, and these limitations interfere with communication. Lack of fluency and sign production errors are typical and often interfere with communication. The interpreter often hesitates in signing, as if searching for vocabulary. Frequent errors in grammar are apparent, although basic signed sentences appear intact. More complex grammatical structures are typically difficult. Individual is able to read signs at the word level and simple sentence level, but complete or complex sentences often require repetitions and repairs. Some use of prosody and space is evident, but use is inconsistent and often incorrect. An individual at this level is not recommended for classroom interpreting.
LEVEL 3: INTERMEDIATE
Demonstrates knowledge of basic vocabulary but may lack vocabulary for more technical, complex, or academic topics. Individual is able to sign in a fairly fluent manner using some consistent prosody, but pacing is still slow with infrequent pauses for vocabulary or complex structures. Sign production may show some errors but generally will not interfere with communication. Grammatical production may still be incorrect, especially for complex structures, but is, in general, intact for routine and simple language. Individual comprehends signed messages but may need repetition and assistance. Voiced translation often lacks depth and subtleties of the original message. An individual at this level would be able to communicate very basic classroom content but may incorrectly interpret complex information, resulting in a message that is not always clear. An interpreter at this level needs continued supervision and should be required to participate in continuing education in interpreting.
LEVEL 4: ADVANCED INTERMEDIATE
Demonstrates broad use of vocabulary with sign production generally correct. Demonstrates good strategies for expressing information when a specific sign is not in his or her vocabulary. Grammatical constructions are generally clear and consistent, but complex information may still pose occasional problems. Prosody is good, with appropriate facial expression most of the time. Individual may still have difficulty with the use of facial expression in complex sentences and adverbial nonmanual markers. Fluency may deteriorate when rate or complexity of communication increases. Individual uses space consistently most of the time, but complex constructions or extended use of discourse cohesion may still pose problems. Comprehension of most signed messages at a normal rate is good, but translation may lack some complexity of the original message. An individual at this level would be able to express much of the classroom content but may have difficulty with complex topics or rapid turn-taking.
LEVEL 5: ADVANCED
Demonstrates broad and fluent use of vocabulary, with a broad range of strategies for communicating new words and concepts. Sign production errors are minimal and never interfere with comprehension. Prosody is correct for grammatical nonmanual markers and for affective purposes. Complex grammatical constructions are typically not a problem. Comprehension of signed messages is very good, and the voiced translation communicates all details of the original message. An individual at this level is capable of clearly and accurately expressing the majority of interactions within the classroom.
NOTES
1. Common usage capitalizes the word Deaf to refer to a cultural identity rather than an audiological measure.
2. PSE is often referred to as Contact Signing (Lucas and Valli 1989).
3. Currently, the RID and NAD are working collaboratively to develop a new certification test. NAD is no longer supporting its evaluation system.
4. Office of Special Education, Programs of National Significance grant (H325 N010013), awarded to Brenda Schick and Kevin Williams.
5. See http://web.jcc.net/academic/ras for more information about the Regional Assessment System.
REFERENCES
Bornstein, H., ed. 1990. Manual communication: Implications for education. Washington D.C.: Gallaudet University Press.
Commission on Education of the Deaf. 1988. Toward equality: Education of the deaf. Washington, D.C.: U.S. Government Printing Office.
Fernald, A. 1989. Intonation and communicative intent in mothers’ speech to infants: Is the melody the message? Child Development 60(6):1497–1510.
Fernald, A., and G. McRoberts. 1996. Prosodic bootstrapping: A critical analysis of the argument and the evidence. In Signal to syntax: Bootstrapping from speech to grammar in early acquisition, ed. J. L. Morgan and K. Demuth, 365–88. Hillsdale, N.J.: Erlbaum.
Individuals with Disabilities Education Act of 1990, 20 U.S. Code, Ch. 33, Secs. 1400–1491, Pub. L. 105–17 (1997).
Johnson, R. E., S. K. Liddell, and C. J. Erting. 1989. Unlocking the curriculum: Principles for achieving access in deaf education. Gallaudet Research Institute Working Paper 89-3. Washington, D.C.: Gallaudet Research Institute, Gallaudet University.
Kantor, R. 1982. Communicative interaction: Mother modification and child acquisition of American Sign Language. Sign Language Studies 36:233–82.
Kemler-Nelson, D. G., K. Hirsh-Pasek, P. W. Jusczyk, and K. Wright-Cassidy. 1989. How the prosodic cues in motherese might assist language learning. Journal of Child Language 16(1):55–68.
Lucas, C., and C. Valli. 1989. Language contact in the American deaf community. In The sociolinguistics of the deaf community, ed. C. Lucas, 11–40. San Diego: Academic Press.
Masataka, N. 1992. Motherese in a signed language. Infant Behavior and Development 15(4):453–60.
Mitchell, M. K. 2002. Statewide training of educational interpreters: How is this possible? In Proceedings of the Seventeenth National Conference of the Registry of Interpreters for the Deaf, 83–117. Alexandria, Va.: RID Publications.
Moore, D. S., M. J. Spence, and G. S. Katz. 1997. Six-month-olds’ categorization of natural infant-directed utterances. Developmental Psychology 33:980–89.
No Child Left Behind Act of 2001, Pub. L. 107–110, 115 Stat. 1425 (2002).
Reilly, J. S., and U. Bellugi. 1996. Competition on the face: Affect and language in ASL motherese. Journal of Child Language 23:219–36.
Schick, B. 1997. The effects of discourse genre on language complexity in school-aged deaf students. Journal of Deaf Studies and Deaf Education 2:234–51.
Schick, B., L. Johnson, and K. Williams. 2004. Look who’s being left behind: Deaf children with interpreters in the public schools. Paper presented at the Office of Education Personnel Preparation Conference, 24 April, Washington, D.C.
Schick, B., and K. T. Williams. 1992. The educational interpreter performance assessment: A tool to evaluate classroom performance. Paper presented at the conference on Issues in Language and Deafness: The Use of Sign Language in Educational Settings: Current Concepts and Controversies, Omaha, Nebraska.
Schick, B., K. Williams, and L. Bolster. 2000. Skill levels of educational interpreters working in the public schools. Journal of Deaf Studies and Deaf Education 4:144–55.
Supalla, S. 1992. Equality in educational opportunities: The deaf version. In A free hand: Enfranchising the education of deaf children, eds. M. Walworth, D. F. Moores, and T. J. O’Rourke, 170–81. Silver Spring, Md.: T.J. Publishers.
Woodward, J., and T. Allen. 1988. Classroom use of artificial Manual English sign systems by teachers. Sign Language Studies 55:60.