
Advances in Educational Interpreting: 1 The Impact of Sign Language Interpreter Skill on Education Outcomes in K–12 Settings



1

The Impact of Sign Language Interpreter Skill on Education Outcomes in K–12 Settings

Deborah Cates and Julie Delkamiller

Roughly 86% of deaf students in school are educated in mainstream settings, and at least 14% of those students use sign language interpreters in their classrooms (Gallaudet Research Institute, 2011). Under federal law, the Individuals With Disabilities Education Act (IDEA), Section 300.34, sign language interpreters are related service providers, just as speech-language pathologists, audiologists, psychologists, occupational therapists, counselors, and orientation and mobility specialists are. The burden placed on states for the recruitment, training, and support of qualified related service providers therefore includes interpreters.

The primary instrument for measuring sign language interpreter qualification to work in educational settings is the Educational Interpreter Performance Assessment (EIPA) offered by Boys Town National Research Hospital. It gives interpreters numerical indices on a scale of 0–5 for 37 different skill areas that are averaged together for an overall score ranging from 0 to 5, rounded to the nearest tenth of a point (Schick & Williams, 2004). The EIPA may be taken with elementary- or secondary-level stimuli in one of three target sign forms: Manually Coded English (MCE), Pidgin Signed English (PSE), and American Sign Language (ASL). MCE refers to systems of signs that are intended to be visual analogs of English, containing as much of the structure and morphology of English as possible (Schick, 2003). PSE is a contact variety of ASL and English with lexical, semantic, and pragmatic components of ASL combined with syntactic components of English (Woodward, 1973). ASL is a natural human language with its own complex phonology, morphology, syntax, semantics, and pragmatics. It is the language of the Deaf community in the United States and parts of Canada.
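The overall-score derivation described above (an average of the 37 skill-area ratings, rounded to the nearest tenth of a point) can be sketched as follows. Equal weighting of the skill areas is an assumption made here for illustration; the official EIPA scoring rubric may weight areas differently.

```python
def eipa_overall(skill_ratings):
    """Average skill-area ratings (each on a 0-5 scale) into an
    overall EIPA-style score, rounded to the nearest tenth.

    Equal weighting is an illustrative assumption, not the
    documented Boys Town scoring procedure.
    """
    if len(skill_ratings) != 37:
        raise ValueError("the EIPA rates 37 skill areas")
    if not all(0 <= r <= 5 for r in skill_ratings):
        raise ValueError("ratings fall on a 0-5 scale")
    return round(sum(skill_ratings) / len(skill_ratings), 1)

# Hypothetical interpreter rated 3 on 20 areas and 4 on 17 areas
print(eipa_overall([3] * 20 + [4] * 17))  # → 3.5
```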

The numerical score indices on the EIPA equate with different skill level descriptions: beginner (Level 1), advanced beginner (Level 2), intermediate (Level 3), advanced intermediate (Level 4), and advanced (Level 5) (see www.classroominterpreting.org for complete descriptions of each level). At an EIPA score of 3.0, interpreters convey roughly 60–70% of the information present in the source language (Langer, 2007; Schick, 2004). Research indicates that this information is at the lexical or phrasal level with significant deficits in the representation of prosodic, pragmatic, and discourse-level information (Cates, 2021; Schick et al., 1999, 2005). For a rudimentary illustration of what 60–70% of information may look like without prosodic, pragmatic, and discourse-level information, see Figures 1 and 2. Figure 1 is a word cloud, where all of the words of this chapter thus far are included. From this cloud, one can see what this chapter is about and some of the ideas expressed in it, but one is left to guess at the connections between the words. Figure 2 is a complex image with 40% of it randomly covered. The original image is in Figure 3. These images represent a rough facsimile of material interpreted with an EIPA score of 3.0.

image

Figure 1. Word cloud for this chapter.

image

Figure 2. Complex image with 40% covered.

States have individual requirements for educational interpreter skill levels that range from no national exam required to an EIPA score of 4.0 or better (Johnson et al., 2014). The National Association of Interpreters in Education recommends in their guidelines that educational interpreters have a minimum score of 4.0 on the EIPA (National Association of Interpreters in Education, 2019). As of 2021, a total of 42 states have established an EIPA requirement, of which 34 states require an EIPA score of 3.5 or 4.0 (National Association of Interpreters in Education, 2021). On average, about one-third of interpreters who take the EIPA score 3.5 or higher, but fewer than 20% score 4.0 or higher (Schick et al., 2005). This achievement rate has been relatively stable for more than a decade (Cates, 2021; Johnson et al., 2018). States may also have special permits or waivers for filling interpreter needs in schools that allow educational interpreters to work for a time without meeting the minimum EIPA or other credentialing standards. These educational interpreters often work in elementary settings with young children who do not yet have fully developed language, resulting in the least skilled interpreters working with the most vulnerable students (Schick et al., 2005).

image

Figure 3. Complex image not covered.

Although states use the EIPA to establish minimum requirements for interpreters, there are no studies to date that explore how educational interpreters’ skill level as measured by the EIPA affects student learning. This question is more complex than simply connecting the amount of information present in an interpretation with that present on an exam. Educational interpreters are not the only avenue of access for deaf children in the classroom. They may have access to some spoken language in their environment, visual printed materials, kinesthetic learning activities, peer assistance, and more (see Marschark et al., 2004; Smith, 2010). Furthermore, deaf students have a great deal of variation in their linguistic and cognitive capabilities (Marschark & Hauser, 2012), such that educational interpreter skills will not necessarily impact two different students in the same way. However, studies of direct versus interpreted education consistently show that students as a whole do better in environments with direct instruction (Kurz, 2004; Kurz et al., 2015; Marschark et al., 2005) unless their instructors have experience teaching deaf students and have highly qualified educational interpreters (Marschark et al., 2008). These studies all use nationally certified sign language interpreters, many of them also native users of ASL with many years of work experience. They represent the highest quality interpreters in the field, which is not reflective of the skill level typical for educational interpreters (Cates, 2021; Schick et al., 1999, 2005; Witter-Merithew & Nicodemus, 2012). Furthermore, all but Kurz’s studies assess student learning with deaf university students, who are arguably not representative of the population of deaf students in K–12 settings. Therefore, the current study assesses how educational interpreter skill level as measured by the EIPA affects student learning in a K–12 setting.

In addition to a lack of research on the effects of educational interpreter skill on student learning outcomes, there is also a lack of research on how interpreter training programs (ITPs) prepare interpreters to work in K–12 settings, even though most interpreters will work in educational settings over the course of their careers (Cogen & Cokely, 2015). Approximately 50% of ITP graduates work in schools full-time, and about 30% of ITP graduates work in schools immediately upon graduation, even though the average gap from graduation to national certification is 2 years with a bachelor’s degree and 3 years with an associate’s degree (Anderson & Stauffer, 1990; Cogen & Cokely, 2015; Humphrey & Alcorn, 2007; Janzen, 2006; Patrie, 1994; Schafer & Cokely, 2016; Walker & Shaw, 2011). Even though interpreting for children is not the same as interpreting for adults, ITPs do not typically prepare interpreters specifically for interpreting work in educational settings (Cogen & Cokely, 2015; Schick et al., 1999, 2005; Smith, 2010; Witter-Merithew & Nicodemus, 2012). There are some ITPs with a designated emphasis on interpreting in educational settings, but such programs are the exception; a rough internet search of ITPs returned just seven colleges and universities that have an emphasis in educational interpreting out of dozens of programs. Furthermore, as previously mentioned, fewer than 20% of interpreters who take the assessment achieve an EIPA score of 4.0 or higher; given that the EIPA is the only valid and reliable national exam for interpreting in educational settings, this low percentage is worrisome. It is clear that many educational interpreters are not highly skilled practitioners. It is critical to better understand how educational interpreter skills influence student learning outcomes.

To address this need, the current study compares the impact of different service delivery models on deaf student learning outcomes. Data were collected from the same group of deaf students under four learning conditions: a lecture in English with an educational interpreter who achieved an EIPA score of 4.0 within 12 months prior to data collection, a lecture in English with an educational interpreter who achieved an EIPA score of 3.0 within 12 months prior to data collection, a lecture in ASL from a credentialed teacher of the deaf, and a lecture from a credentialed teacher of the deaf who used Simultaneous Communication (SimCom; speaking and signing at the same time). In all conditions, students took a pretest and a posttest. The pretest and posttest were not identical, but they had paired questions that were asking for similar information in similar ways. Student learning was assessed by comparing scores on the pretest with those on the posttest. If a student scored incorrectly on the pretest but correctly on the posttest for a paired item, it was considered learning. Each student was compared with themselves across conditions to assess the impact of service delivery on their learning.

Methods

For this experiment, a group of deaf students moved through four conditions: direct instruction in ASL from a credentialed teacher of the deaf (C-ASL), direct instruction in SimCom from a credentialed teacher of the deaf (C-SIM), instruction in spoken English with an educational interpreter scoring a 4.0 on the EIPA (C-4.0), and instruction in spoken English with an educational interpreter scoring a 3.0 on the EIPA (C-3.0).

Participants

Approved study recruitment documents and permission forms were sent to the parents of middle and high school students at a state school for the deaf. Six students were approved for participation in this study. Their ages range from 12 to 17. Three are male, and three are female. All six students have a minimum of 5 years of experience using sign language, and all six utilize some form of assistive listening device—three use hearing aids and three have cochlear implants (Table 1).

Two educational interpreters were recruited via email through Training and Assessment Systems for K–12 Educational Interpreters (TASK-12). Both educational interpreters had taken the EIPA within 12 months. Both have bachelor’s degrees in areas other than physical science, and both work full-time interpreting in K–12 settings. One educational interpreter scored an EIPA 4.0, and the other an EIPA 3.0. The average scores for both educational interpreters in each of the four EIPA domains were within the average range for interpreters at those levels on the EIPA (see Cates, 2021), indicating that they are representative of typical interpreters at those EIPA skill levels.

Two instructors were recruited from the same state school for the deaf. Both teachers have normal hearing and are second-language learners of ASL, which is reflective of the majority of teachers for the deaf (Simms et al., 2008). Both are credentialed teachers of the deaf with endorsements in science. The instructor for the C-ASL condition (teacher using ASL) has 3 years of teaching experience and is also a certified sign language interpreter with a score of Advanced Plus on the Sign Language Proficiency Interview, which indicates near-native ASL proficiency. The instructor for the C-SIM (teacher signing and speaking at the same time), C-4.0 (interpreted lecture with EIPA 4.0), and C-3.0 (interpreted lecture with EIPA 3.0) conditions has 40 years of teaching experience. Instructors were given lesson plans prior to the data collection sessions, but did not see the tests in advance.

Table 1. Participant Characteristics

image

Note. CI = cochlear implant; F = female; HA = hearing aid; M = male.

Selection of Materials

Tests

In order to assess student learning in each condition, multiple-choice pre- and posttests with 10 questions per test were used. Questions were drawn from various released state standardized tests, all at the fifth-grade level in physical science. Questions were organized into topical groups so that lesson plans could be focused on a particular topic. The four topics were states of matter, circuits, types of energy, and conduction. The questions on the pre- and posttest for each condition were not identical, but a subset of the pre- and posttest questions were paired so that they asked for similar information in a similar way (see Table 2 for an example of paired questions and Table 3 for the number of paired questions per condition). These paired questions were used to assess student learning. If a student scored incorrectly on the pretest but correctly on the posttest paired item, it was considered as 1 point of learning. Aggregate scores for each student on the pre- and posttest were used to determine whether the students were able to read at a level appropriate to answer the questions; if they did not score above chance on any of the tests, they were removed from the analysis. This resulted in the removal of one student.
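The paired-item scoring rule described above can be sketched in code. The data representation here (parallel lists of booleans marking whether each paired item was answered correctly) is a hypothetical one chosen for illustration, not the authors' actual scoring sheet:

```python
def learning_points(pre_correct, post_correct):
    """Count paired items answered incorrectly on the pretest but
    correctly on the posttest; each such item is 1 point of learning.

    pre_correct / post_correct: parallel lists of booleans, one
    entry per paired question (hypothetical representation).
    """
    if len(pre_correct) != len(post_correct):
        raise ValueError("paired lists must be the same length")
    return sum(1 for pre, post in zip(pre_correct, post_correct)
               if not pre and post)

# Example: a student misses the 1st and 3rd paired items on the
# pretest, then answers both correctly on the posttest
pre = [False, True, False, True, True]
post = [True, True, True, False, True]
print(learning_points(pre, post))  # → 2
```

Note that an item answered correctly on the pretest but incorrectly on the posttest (the fourth item above) subtracts nothing; only the wrong-then-right pattern counts as learning.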

Lesson Plans

Lesson plans were developed by one of the coauthors, who is also a credentialed teacher of the deaf, and follow the “5E” structure: engage, explore, explain, elaborate, and evaluate (Bybee et al., 2006). Each lesson plan had an opening activity, observations, discussion, explanation, and then several challenging questions followed by a brief review prior to the posttest. All lessons included a hands-on lab portion. Lesson plans were created to ensure that the material in the multiple-choice tests was covered in the lesson.

Table 2. Sample Paired Questions

Pretest: Copper wire is often wrapped in plastic. Plastic material is a good ___
  a. electromagnet
  b. insulator
  c. circuit
  d. current

Posttest: What material would be safest to use as an insulator to cover electrical wires?
  a. aluminum
  b. tin
  c. rubber
  d. water

Table 3. Number of Paired Questions per Condition

Condition    Number of paired questions
C-ASL        8
C-4.0        5
C-3.0        8
C-SIM        6

Note. C-3.0 = instruction in spoken English with an educational interpreter scoring a 3.0 on the EIPA; C-4.0 = instruction in spoken English with an educational interpreter scoring a 4.0 on the EIPA; C-ASL = direct instruction in American Sign Language from a credentialed teacher of the deaf; C-SIM = direct instruction in Simultaneous Communication from a credentialed teacher of the deaf; EIPA = Educational Interpreter Performance Assessment.

Procedure

The four lessons were randomly assigned to a study condition. Condition C-ASL was a lesson on types of energy; C-4.0 was a lesson on circuits; C-3.0 was a lesson on conductors; C-SIM was a lesson on states of matter. Data collection took place over the course of four sessions spread over 4 weeks. Each session was at the same time of day on a Wednesday afternoon. Cameras on the teacher, students, and educational interpreter recorded the entirety of each session. Students took a 10-question pretest on the material that would be presented in the lesson that day. Following the pretest, the teacher taught the lesson. In the C-ASL condition, the teacher taught in ASL. In the C-SIM condition, the teacher taught in spoken English and sign language simultaneously. In the C-4.0 condition, the teacher taught in English with an EIPA 4.0 educational interpreter. In the C-3.0 condition, the teacher taught in English with an EIPA 3.0 educational interpreter. Following the lesson, students took a 10-question posttest. Researchers were in the room for the pretest and posttest, but not during the lesson. Following the posttest, a native signing deaf adult interviewed the students as a group about their experiences during the study condition. This interview was recorded, but the teacher and researchers were not in the room at the time. Following each condition, pretests and posttests were scored. Total scores and scores by test item were calculated and recorded for each student.

Results

Raw scores on the pre- and posttests across students as well as the number of items showing learning are reported in Tables 4 and 5. Student 3 was absent for C-3.0, and Student 4 was absent for C-ASL, so the respective cells for those conditions are blank.

As shown in Table 4, raw scores on posttests increased from scores on pretests for three of the four students present for the C-ASL condition, for one of the five present for the C-SIM condition, for three of the five present for the C-4.0 condition, and for none of the four present for the C-3.0 condition. In fact, all four students showed decreases in their raw scores in the C-3.0 condition. This indicates that students performed better on the whole with direct instruction in ASL or with an educational interpreter with an EIPA score of 4.0 than with direct instruction in SimCom or with an educational interpreter with an EIPA score of 3.0. However, raw scores alone are not sufficient to reflect learning because there are any number of other factors that could have affected how the students took each test.

Table 4. Raw Scores by Subject

image

Note. C-3.0 = instruction in spoken English with an educational interpreter scoring a 3.0 on the EIPA; C-4.0 = instruction in spoken English with an educational interpreter scoring a 4.0 on the EIPA; C-ASL = direct instruction in American Sign Language from a credentialed teacher of the deaf; C-SIM = direct instruction in Simultaneous Communication from a credentialed teacher of the deaf; EIPA = Educational Interpreter Performance Assessment.

To determine what, and whether, students learned in each classroom condition, a subset of the test questions from the pretest and posttest in each condition were further analyzed. These paired questions asked for similar information and were phrased in similar ways, such that a student who could correctly answer one should be able to correctly answer the other. If a student answered the pretest question incorrectly but the paired posttest question correctly, that was counted as 1 point of learning. Because each question had four multiple-choice responses, there is an 18.75% chance that this pattern could emerge by random guessing on any single item (0.75 × 0.25 = 0.1875). For learning to be considered statistically significant (p < 0.05), a student therefore had to exhibit the pattern on two or more questions (0.1875 × 0.1875 ≈ 0.035), or multiple students had to show learning on the same item. Evidence of learning that fits these criteria is marked with an asterisk in Table 5. As shown in Table 5, three of the four students present for C-ASL showed statistically significant learning, two of the five for both C-SIM and C-4.0, and none for C-3.0.
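The chance probabilities used in that significance criterion can be verified with a short calculation, assuming a guesser answers each four-option item independently at random:

```python
# Probability a pure guesser shows the wrong-then-right pattern on a
# single four-option paired item: P(wrong on pretest) * P(right on posttest)
p_single = 0.75 * 0.25   # = 0.1875, roughly 19%

# Probability of showing that pattern on two independent paired items
p_double = p_single ** 2  # ≈ 0.035, below the p < 0.05 threshold

print(f"{p_single:.4f}")  # → 0.1875
print(f"{p_double:.4f}")  # → 0.0352
```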

When both raw scores and learning scores are taken into account, students performed best with direct instruction in ASL, then with an EIPA 4.0 educational interpreter, and then with direct instruction through SimCom. Students did not exhibit learning by either measure with an EIPA 3.0 educational interpreter.

Individual students differed in their learning across conditions. Two students demonstrated learning in multiple conditions, two exhibited learning in only one condition, and one did not exhibit learning in any condition. Student background information with learning results added is provided in Table 6. Although the sample size is small, there do not appear to be any effects of assistive technology or years of signing on student learning in different conditions. Interestingly, both of the bilaterally implanted students learned the most in the direct ASL condition.

Table 5. Learning by Subject

Subject    C-ASL    C-SIM    C-4.0    C-3.0
1          3*       2*       1*       1
2          1        1        0        1
3          1*       1*       1        nd
4          nd       0        1*       0
6          4*       0        0        1

Note. *p < .05. C-3.0 = instruction in spoken English with an educational interpreter scoring a 3.0 on the EIPA; C-4.0 = instruction in spoken English with an educational interpreter scoring a 4.0 on the EIPA; C-ASL = direct instruction in American Sign Language from a credentialed teacher of the deaf; C-SIM = direct instruction in Simultaneous Communication from a credentialed teacher of the deaf; EIPA = Educational Interpreter Performance Assessment; nd = no data.

Table 6. Participant Background and Learning

image

Note. C-3.0 = instruction in spoken English with an educational interpreter scoring a 3.0 on the EIPA; C-4.0 = instruction in spoken English with an educational interpreter scoring a 4.0 on the EIPA; C-ASL = direct instruction in American Sign Language from a credentialed teacher of the deaf; C-SIM = direct instruction in Simultaneous Communication from a credentialed teacher of the deaf; CI = cochlear implant; EIPA = Educational Interpreter Performance Assessment; F = female; HA = hearing aid; M = male; nd = no data.

Discussion

The results of this study indicate that deaf students learn best in an environment with direct instruction in ASL from a licensed and credentialed teacher of the deaf. Students can learn in an environment with simultaneous communication or with an educational interpreter, but the educational interpreter must be qualified. These results strongly suggest that interpreters with an EIPA score of 3.0 should not be allowed to work in a classroom. Replication of this study with an interpreter who has an EIPA score of 3.5 is necessary to assess the wisdom of setting 3.5 as the minimum qualifying score for educational interpreters.

The design of this research study is valid for identifying differences in student learning across conditions, even with a small sample. Deaf student variation is controlled by comparing each student against themselves across conditions. Reading comprehension is controlled by using aggregate test scores to ensure students can score above chance on a multiple-choice test at the fifth-grade reading level. Educational interpreter skill level is controlled by selecting interpreters with the same level of education and EIPA test scores not more than 1 year old. These interpreters’ EIPA score spreads are within average ranges in each tested area for their overall score level, ensuring that they represent the average skills of interpreters at their overall score level. Content variation is controlled by using released standardized test questions at the same grade level, with questions clustered around specific topics in the same general field without being scaffolded on one another. Teacher variation is controlled by providing structured lesson plans and having the same teacher teach all of the spoken English lessons.

The current study has a few limitations. The first is the small sample size. This study will need to be replicated and expanded in order for the results to be generalizable or to see patterns or trends across students with similar backgrounds. The second is that not all students attended every session due to scheduling conflicts. Due to the nature of the study design, the comparisons for these individuals are incomplete. The third is that the authors do not have complete language profiles on all students. In future studies, more background information should be collected and language test scores should be requested. The fourth is that student reading levels were not obtained prior to the study. Although the authors used aggregate test scores to ensure students could answer above chance on the multiple-choice tests, it would have been beneficial to obtain reading levels from students prior to their participation. Another option would be to lower the reading level of the test questions or to provide a formal sign translation of the test questions to avoid confounds from reading comprehension. The fifth limitation is the variation in paired test items across conditions. The tests used for content assessment in the future should have all 10 questions paired on the pretest and posttest for all assessments. However, the two conditions with the greatest disparity in learning, C-ASL and C-3.0, each had eight of the ten questions paired, so there is confidence that this result is not due to differences in the number of paired questions available for analysis.

Conclusion

This study has strong implications for educational interpreters. As discussed earlier, state standards for educational interpreters vary from no skill requirement to an EIPA score of 4.0+. The majority of states have a requirement of an EIPA 3.5 or higher, and five states have no requirement (National Association of Interpreters in Education, 2021), even though sign language interpreters are specifically mentioned in IDEA law as related service providers. This study is the first of its kind, showing that interpreter skill level as measured by the EIPA directly impacts student learning outcomes. Furthermore, this study indicates that educational interpreters should have an EIPA score of 4.0+ to be able to provide effective services, although they are still not as effective as direct instruction in ASL. Educational interpreters who have not achieved an EIPA score of 4.0 should be working under the supervision of a qualified mentor interpreter, and states should be raising the bar for educational interpreter standards. To do less than this would mean that deaf students do not have access to a free and appropriate public education.

References

Anderson, G., & Stauffer, L. (1990). Identifying standards for the training of interpreters for deaf people. University of Arkansas Rehabilitation Research and Training Center on Deafness and Hearing Impairment.

Bybee, R., Taylor, J., Gardner, A., Van Scotter, P., Powell, J., Westbrook, A., & Landes, N. (2006). The BSCS 5E instructional model: Origins and effectiveness. BSCS, 5, 88–98.

Cates, D. (2021). Patterns in EIPA test scores and implications for interpreter education [Manuscript under review].

Cogen, C., & Cokely, D. (2015). Preparing interpreters for tomorrow: Report on a study of emerging trends in interpreting and implications for interpreter education. National Interpreter Education Center.

Gallaudet Research Institute. (2011). Regional and national summary report of data from the 2009–10 Annual Survey of Deaf and Hard of Hearing Children and Youth. Gallaudet Research Institute, Gallaudet University.

Humphrey, J., & Alcorn, B. (2007). So you want to be an interpreter? An introduction to sign language interpreting. H & H Publishing.

Janzen, T. (2006). Visual communication: Signed language and cognition. In G. Kristiansen, M. Achard, R. Dirven, & F. Ibanez (Eds.), Cognitive linguistics: Current applications and future perspectives (pp. 359–378). Mouton de Gruyter.

Johnson, L., Taylor, M., Schick, B., Brown, S., & Bolster, L. (2018). Complexities in educational interpreting: An investigation into patterns of practice. Interpreting Consolidated.

Kurz, K. (2004). A comparison of deaf children’s learning in direct communication versus an interpreted environment [Unpublished doctoral dissertation]. University of Kansas, Lawrence.

Kurz, K., Schick, B., & Hauser, P. (2015). Deaf children’s science content learning in direct instruction versus interpreted instruction. Journal of Science Education for Students With Disabilities, 18(1), 23–37. https://doi.org/10.14448/jsesd.07.0003

Langer, E. (2007). Classroom discourse and interpreted education: What is conveyed to deaf elementary school students (Order no. 3256442J) [Doctoral dissertation, University of Colorado at Boulder]. ProQuest Dissertations and Theses Global.

Marschark, M., & Hauser, P. (2012). How deaf students learn. Oxford University Press.

Marschark, M., Sapere, P., Convertino, C., & Pelz, J. (2008). Learning via direct and mediated instruction by deaf students. Journal of Deaf Studies and Deaf Education, 13(4), 546–561. https://doi.org/10.1093/deafed/enn014

Marschark, M., Sapere, P., Convertino, C., & Seewagen, R. (2005). Access to postsecondary education through sign language interpreting. Journal of Deaf Studies and Deaf Education, 10(1), 38–50. https://doi.org/10.1093/deafed/eni002

Marschark, M., Sapere, P., Convertino, C., Seewagen, R., & Maltzen, H. (2004). Comprehension of sign language interpreting: Deciphering a complex task situation. Sign Language Studies, 41(4), 345–368. https://doi.org/10.1353/sls.2004.0018

National Association of Interpreters in Education (NAIE). (2019). Professional guidelines for interpreting in educational settings. https://www.naiedu.org/guidelines

National Association of Interpreters in Education (NAIE). (2021, January 1). State standards. https://www.naiedu.org/state-standards/

Patrie, C. (1994). The readiness to work gap. In E. Winston (Ed.), Mapping our course: A collaborative venture. Proceedings of the Tenth National Convention of Interpreter Trainers (pp. 53–56). CIT.

Schafer, T., & Cokely, D. (2016). Report on the national needs assessment initiative: New challenges-needed challenges. National Interpreter Education Center at Northeastern University. http://www.interpretereducation.org/wp-content/uploads/2014/02/Final-K-12-Interpreter-Report-12-20.pdf

Schick, B. (2003). The development of American Sign Language and manually coded English systems. In M. Marschark & P. Spencer (Eds.), Oxford handbook of deaf studies, language, and education (pp. 219–231). Oxford University Press.

Schick, B. (2004). How might learning through an educational interpreter influence cognitive development. In E. A. Winston (Ed.), Educational interpreting: How it can succeed (pp. 73–87). Gallaudet University Press.

Schick, B., & Williams, K. (2004). The Educational Interpreter Performance Assessment: Current structure and practices. In E. A. Winston (Ed.), Educational interpreting: How it can succeed (pp. 186–205). Gallaudet University Press.

Schick, B., Williams, K., & Bolster, L. (1999). Skill levels of educational interpreters working in public schools. Journal of Deaf Studies and Deaf Education, 4(2), 144–155. https://doi.org/10.1093/deafed/4.2.144

Schick, B., Williams, K., & Kupermintz, H. (2005). Look who’s being left behind: Educational interpreters and access to education for deaf and hard-of-hearing students. Journal of Deaf Studies and Deaf Education, 11(1), 3–20. https://doi.org/10.1093/deafed/enj007

Simms, L., Rusher, M., Andrews, F., & Coryell, J. (2008). Apartheid in deaf education: Examining workforce diversity. American Annals of the Deaf, 153(4), 384–395. https://doi.org/10.1353/aad.0.0060

Smith, M. B. (2010). More than meets the eye: Revealing the complexities of K–12 interpreting (Order no. 3404359) [Doctoral dissertation, University of California, San Diego]. ProQuest Dissertations and Theses Global.

Walker, J., & Shaw, S. (2011). Interpreter preparedness for specialized settings. Journal of Interpretation, 21(1), 96–108. https://digitalcommons.unf.edu/joi/vol21/iss1/8

Witter-Merithew, A., & Nicodemus, B. (2012). Toward the international development of interpreter specialization: An examination of two case studies. Journal of Interpretation, 20(1), 8. http://digitalcommons.unf.edu/joi/vol20/iss1/8

Woodward, J. (1973). Some characteristics of pidgin sign English. Sign Language Studies, 3(1), 39–46. https://doi.org/10.1353/sls.1973.0006
