4
Contextualizing Assessment
The choice of source stimulus texts is as fundamental to the assessment and evaluation of interpreting as the choice of appropriate evaluation rubrics. Despite this, both informal discussion and interview data collected to date from experienced interpreting educators in the United States indicate that only some can provide solid, evidence-based criteria for that selection. By far the most common rationale for stimulus text choice seems, anecdotally, to be “gut feeling.” When pressed, some will expand the rationale to include “authenticity,” “good quality sound or video,” “length,” or “it works.” While such factors are important, they do little to support a deliberate, principled selection of source stimulus texts, especially in education and evaluation.
The discussion of which source texts will most effectively draw out the kind of interpretation that reflects the features we intend to evaluate is essential to interpreting education and assessment. In ASL/English interpretations, for example, many features heavily weighted in evaluation appear with more or less frequency depending on the genre and register of the ASL source text. These include the use of referential and prosodic space, nonmanual signals, fingerspelling, and hand shift. These linguistic features serve specific functions in ASL. The same functions are served in English by a variety of linguistic features that are often different from those in ASL. For example, ASL combines spatial referencing with lexical items to create cohesion through discourse, often without directly renaming a referent; English requires more frequent renaming of a character to maintain clear referents throughout the discourse. It is the effective use of these features that contributes, in part, to a coherent, dynamically equivalent target production.
The research described here reports the first stages of input from an international group of interpreting educators and researchers (Winston & Swabey, 2011). Although many participants represent ASL/English interpreting and are from the United States, others research and teach other signed languages as well as spoken languages. The initial findings reported here can be applied to, and be beneficial for, educators and evaluators considering the source text as the starting point for assessing a target interpretation. These findings include both theoretical, evidence-based practices and practical applications.
Project Description
Beginnings
This project is an outgrowth of an earlier project in which several U.S. interpreting educators and researchers attempted to identify the expected interpreting skill outcomes of students graduating from 4-year interpreter education programs. The facilitators of this source text project, Dr. Winston and Dr. Swabey, became interested in source text selection when they realized, after working with the research group for over 2 years, that one major challenge stemmed from “gut feeling” source text selection. An initial literature review yielded little information, so they decided to convene a group of leading experts to obtain a variety of perspectives and to identify existing resources.
Convening Experts: An Online Seminar
After identifying researchers and educators known to have written about, or conducted research in, the area of source text selection, the facilitators determined that an online seminar could best involve the worldwide group of experts. The seminar, titled “Source Text Selection for Interpreting Education,” ran from June 28 to July 2, 2010, and was hosted in an online, asynchronous setting. The original objectives of the online seminar were to (1) identify existing resources in source text selection in interpreting, (2) generate questions for further investigation and consider potential directions for future research, and (3) examine current practices in source text selection.
Participants
The facilitators convened a group of 20 international researchers and educators from both signed and spoken language interpreting, inviting them to participate in the 5-day online seminar. Invited participants were asked to commit to logging in twice a day and contributing three or four substantial postings during the seminar. As a benefit of participation, each participant had access to the complete online seminar discussions after the seminar ended, through December 31, 2010.
Seminar Structure
The online seminar opened for prereadings on June 28, 2010, with active discussions beginning June 29 and continuing through July 2. Four forums were established: one for posting prereading resources and three for active discussion. The three discussion forums drew a total of 68 postings:
- Where do we find source texts? (48 postings)
- Factors in selecting source texts (5 postings)
- Source text examples (15 postings)
Although the three discussion forums were established to spark conversation from different perspectives, all three were similar in content, with participants contributing input about choices, sources, rationales, and uses of texts across all of the forums.
Results/Findings of the Group
Importance of Source Text Selection
Two of the three objectives, identifying resources and examining current practices in source text selection, were addressed in some detail; the third, generating ideas for future research, was discussed in a few postings but not pursued in depth.
Objective 1: Identifying Resources
Based on a review of the literature, a few resources were either posted or suggested as potential readings for participants. These included “Assessing Source Material Difficulty for Consecutive Interpreting” (Liu & Chiu, 2009); “Student Competencies in Interpreting” (Roberts, 1992); Introducing Interpreting Studies (Pöchhacker, 2004, Chapter 9); and Fundamental Aspects of Interpreting Education (Sawyer, 2004, Chapters 4, 8). By the end of the online seminar, more than 30 resources had been shared with the group, ranging from articles and informational resources to radio and television broadcasts to personal experiences and specially produced texts. Many of the resources were sites that included a variety of individual source texts, along with tools and materials that supported their use as teaching texts. Participants usually accompanied their contributions with descriptive explanations of why a text or source was helpful in their work. The list of resources is being prepared for public posting in the near future; all are being entered into a database that educators and evaluators will be able to search and use.
Objective 2: Directions for Future Research
Discussion around this objective identified two important directions. Participants described their criteria for source text selection, indicating that they looked for “appropriate” levels of difficulty, relevance, speed, and density in the texts. One direction for future research is determining the parameters of “appropriate” in different settings and for various uses. The other is understanding specific test specifications in more depth and identifying where, when, and how they might be useful in our work.
Objective 3: Current Practices in Source Text Selection
Discussion on this topic was rich and broad. The following summary is intended to present some preliminary groupings of the topics rather than a definitive description of the criteria for source text selection. The value of source text selection as a meaningful focus of teaching and research was itself an important part of the discussion. Three reasons supporting the need for such discussion and research were minimizing threats to interrater reliability in evaluation; establishing continuity across teaching practices; and, especially in interpreter education programs, contributing to fairness for students and test-takers.
This objective generated a great deal of in-depth discussion and expanded into two major subtopics: the purposes of source text selection and the features considered in source text selection. Each subtopic is summarized below.
Purposes of Source Text Selection
Overall, the group identified two main uses of source texts in interpreting: evaluation and education.
Evaluation
Source texts, when selected for evaluation purposes, were expected to provide a snapshot of interpreting skills that demonstrated a minimum level of competence for a given domain or environment. Various target groups were identified as needing evaluation. These were the newly graduated student, the certified or credentialed generalist interpreter (e.g., National Accreditation Authority for Translators and Interpreters, RID, AVLIC), and the certified or credentialed specialist interpreter (e.g., conference interpreting, legal interpreting, educational interpreting).
Education
Source texts, when used for educational purposes, were selected to provide ongoing practice that encourages growth toward competence, whether for students just learning to interpret or for skilled professionals seeking to enhance their skills or enter new specializations. Source texts were expected to produce target interpretations that allowed teachers and students to identify strengths and weaknesses in the interpreting products/processes and that could also be used to demonstrate and document growth and progress.
Features of Source Text Selection
Four categories of specific criteria surfaced during the discussions: relevance, authenticity, text features, and multipurpose applications. Of these, the first three were similar regardless of whether the purpose was evaluation or education.
The fourth, multipurposing, was discussed in the context of education and simply not addressed in relation to evaluation. It is important to note that these four categories are not intended to be discrete, mutually exclusive groups; rather, they overlap in many cases. Participants agreed that, in principle, source texts should (1) match or appropriately challenge the interpreter’s current level of expertise (for teaching); (2) match the level of expertise deemed essential for working or certification in that arena (for evaluation); and (3) trigger linguistic/discourse features in target language production. Figure 11 illustrates the similarities and differences identified through these discussions.
Figure 11. Similarities and differences of criteria for selecting source texts.
Relevance
The relevance of a source text to the purpose and target audience surfaced as an essential feature among the participants. Both the content and the contextual features of the source text needed to be relevant to users’ goals and needs for expertise. Aspects emphasized included:
- Discourse style/type: The source text needs to be of the same or a similar type of discourse as that most often interpreted by the interpreter (e.g., formal presentations for testing conference interpreting skills, medical forms when teaching healthcare interpreting)
- Topic/content: The source text topic and content need to be similar to those the interpreter will encounter in their field or specialization (e.g., medical, diplomatic, academic)
- Number of participants: The source text needs to reflect the kind of interaction that the interpreter is being tested for (e.g., monologue/dialogue)
Authenticity
Authenticity was a second essential category that surfaced through the discussions. Participants emphasized the importance of real-world texts, agreeing that texts should be taken from real-world events whenever possible. This does not mean that students and interpreters should practice only in live real-world events, which could affect the participants in those events negatively. Real-world recordings, however, are now abundantly available. Training programs for teachers, nurses, doctors, and lawyers, for example, record authentic communicative events as training materials for their own students; interpreters can practice with these recordings without negatively affecting the original participants. Other sources include the many Rehabilitation Services Administration grant products recorded in classrooms, conferences, interviews, and so forth. Indeed, the internet is full of real-world interactions across many settings and people. It is incumbent upon the educator and assessor to identify and analyze those recordings for the traits and factors being taught and assessed in any given course or setting.
However, there was also consensus that simulated authenticity (role plays with authentic participants; rereading of authentic presentations) is sometimes necessary for a variety of reasons. These include meeting students’ needs in learning; deleting unusable sections of an authentic text (e.g., too dense, too difficult, off-topic, inaudible); and rendering administration of the text more feasible (e.g., shortening, adding breaks for consecutive practice).
Text Features
Text features formed a third category that participants identified as important in selecting appropriate source texts. These are the characteristics intrinsic to the source text that are predicted to trigger specific parallel features in the target interpretation. Although not intended to be a comprehensive list, the features mentioned included speed, pace, metaphor, idioms, and grammatical structures.
Opportunities for Multipurposing
Especially important for those teaching interpreting was the opportunity to use a source text for many purposes throughout a course or curriculum. Some of the purposes identified included:
- spiraling the text throughout the students’ growth and learning (translation > consecutive interpreting > simultaneous interpreting)
- teaching students how to prepare for a topic
- teaching students how to analyze discourse
- providing opportunities to compare multiple or parallel versions of similar texts
- providing authentic tasks (i.e., allowing students to prepare for topics that they will need to eventually interpret)
- providing practice working with other interpreters
- practicing selective watching
Conclusion
The online seminar was closed for discussion on Friday, July 2. This report shares a summary of participant input about source text selection. An additional product of the seminar was a list of resources and materials for gathering source texts, which is being prepared for public dissemination. Many of these resources were accompanied by participant commentary about the various applications and uses found for them in both testing and teaching; many participants also described their strategies for incorporating them into their teaching. The topic was pursued during a second online seminar, Garbage In = Garbage Out, in March 2011, in which participants were presented with a variety of source text videos chosen based on the input discussed in this chapter and asked to rate them for potential usefulness and appropriateness for performance testing and teaching of interpreters at various skill levels. It is hoped that the results of these early discussions can be pursued further, expanding the base of knowledge for source text selection in education and assessment.