
3

The Interpreting Performance Assessment Rubric

We have aimed to develop a rubric that views interpreting as situated practice and that takes into account “pragmatic and contextual factors” (Quinto-Pozos, 2013, p. 120), rather than focusing solely or primarily on language use and equivalence at the word/phrase level. We hope that the rubric will support an assessment that approaches the text/interaction more globally, as a whole unit of discourse, and takes into account factors such as the communicative situation, the extent to which the interpretation supports participant aims, and the interpreter’s process/decision making. In developing the rubric, we have kept in mind Angelelli and Jacobson’s (2009) observations related to the difficulty of quality assessment in translation and interpreting:

none of the models of translation quality presented thus far [in their chapter] address the “how to” of effectively and accurately measuring quality. The researcher is left to ponder questions related to how “reader response” can be measured and compared; how to determine the variables that demonstrate whether a translation is acceptable to the target discourse community; or how the “function” of a translation is defined in measurable terms. These are all questions that have not been clearly addressed in the literature. (pp. 2–3)

The rubric (Figure 5) can be used to assess both simultaneous and consecutive interpretations across a variety of settings (e.g., education, community, healthcare). It assumes competence in each language’s foundational linguistic and interactional skills (English and ASL, or any other language pair). The rubric is intended to cover a range of uses, from a holistic standard to a more detailed breakdown of the components of the holistic standard. The rubric can be adapted to serve a variety of purposes, ranging from formative feedback for educational purposes to diagnostic assessment to summative evaluations of performance.


Figure 5. Interpreting performance rubric—General domains/subdomains. Note. Weighting is suggested and should be adjusted according to source/target text conditions and to account for authentic versus simulated performances.

The rubric is divided into three broad domains, each of which comprises two or three subdomains, all of which have been identified as contributing to effective interpreting performance. The domains are as follows:

  • Domain 1: Meaning Reconstruction—this is often referred to as “interpreting production” within the field
  • Domain 2: Interpreting Performance Repertoires—this is often referred to as the “interpreting process” within the field
  • Domain 3: Interpreting Setting Management

The remainder of the chapter describes the rubric and its domains and subdomains. It also introduces the scale and descriptors that accompany it. Our primary focus in the following sections is on discussing the rubric’s strengths, refinements, and challenges, as well as on concrete suggestions for use of the rubric.

Domain 1: Meaning Reconstruction

This category, weighted at 75%, assesses the product that we see and/or hear and the effectiveness of the reconstructed message. It replaces what we might traditionally think of as interpreting production. We prefer “meaning reconstruction” because it more precisely describes the object of the assessment. Discussions of product assessment often refer primarily to language (vocabulary and grammar) and minimize assessment of content. Moreover, few if any rubrics focus on the discourse structures chosen to reconstruct the source language’s goals.

For example, constructed dialogue in formal ASL is used to create emphasis. An effective English target language reconstruction would include parallel/comparable formal English discourse forms of emphasis, which might include pitch, intensity, volume, and pausing, but rarely constructed dialogue. In other words, if the intent of the source/presenter is to emphasize, the assessment of the interpretation must focus on structures that emphasize, not on structures that are the same.

This category comprises three subdomains: content, intent, and communicative cues. The labels for these subcategories reflect current understandings of the complexities of discourse, communication, and interpreting. The wording of the domain and subdomain descriptors reminds us that interpreting, and all communication, is about reconstruction, not conveying or transferring, and that our focus is on doing so between languages and people, not within a single language or as an intellectual exercise.

Subdomain: Content

  • Assesses patterns of understanding and reconstruction of the major and minor themes

This subdomain focuses on the reconstruction of content. Assessment of this subdomain includes aspects such as omissions, inaccurate additions, and misrepresentations of content, including compression and distortions of ideas and/or links between ideas. It also involves recognizing and affirming effective patterns of content reconstruction. This subdomain is listed separately from those that follow because content may be reconstructed accurately even when intent and/or communicative cues are inaccurate or misrepresented.

Subdomain: Intent/Purposes

  • Assesses patterns of understanding and reconstruction of the purposes/intent of the participants (source and/or target)

This subdomain is not usually included as a separate component in rubrics. It is an aspect that can easily be missed by interpreters, particularly novices. It may also be challenging to evaluate, given that doing so requires the evaluator to make inferences or assumptions vis-à-vis each participant’s goals and the extent to which the interpreter and the parties (not to mention the assessor) have a shared understanding of said goals. The assessor must also make inferences in terms of assuming or concluding that a given interpreting choice was intended to support a given communicative goal. Nevertheless, we view it as essential. The subdomain takes into account:

  • discourse intent: each participant’s (individual) goal(s) in terms of content to be communicated
  • interaction intent: each participant’s (individual) goal(s) for the interaction/communicative situation

In the context of a math classroom, for example, the teacher’s discourse intent may be to demonstrate the application of a new formula or to introduce student teachers to a new teaching strategy. A presenter’s discourse intent might be to engage the audience in considering new ideas through examples and to persuade them to adopt those ideas. Interaction intent is reflected when the teacher provides supportive feedback to student responses (“Yeah, exactly!”; “That’s perfect!”).

Example 1

Assessment comments along the process continuum, from most complex to most basic:

  • Presenter Intent: the presenter intended to distinguish between top-down and bottom-up cohesion; including that distinction in the interpretation, for example by using directional movement with the sign, is essential
  • Receiver Purpose: in this context, the receiver needs to see the illustration of cohesion in the PowerPoint slide as well as the interpretation of the presenter’s words—pointing to the slide, then interpreting, would help them see the presenter’s point
  • Interpreting Process: the word “cohesion” should have been fingerspelled first, before using that sign
  • Linguistic Issue: that sign doesn’t mean “cohesion”/that is the wrong sign for “cohesion”

Depending on the type of source text, it is useful to further subdivide this subdomain to assess whether the intents of the presenter and the purposes of the receiver are effectively met through the interpreting. The subdomain is also structured to remind users of the full range of complex analysis needed for interpreting and for the assessment of interpreting (Russell & Winston, 2014). Analysis and assessment need to focus on patterns that reflect the interpreter’s considerations for these processes, encompassing basic, easy-to-identify aspects as well as more complex ones. Envisioned as a continuum, this subdomain extends from linguistic issues (basic and easily identified) through interpreting processes to the intents and purposes of both those presenting and those receiving the message. Figure 6 reflects this continuum.

A vertical arrow labeled PROCESS points from a box labeled BASIC at its base to a box labeled COMPLEX at its tip. Four text boxes run alongside the arrow; from top to bottom, they read PRESENTER INTENT, RECEIVER PURPOSE, INTERPRETING PROCESS, and LINGUISTIC ISSUE.

Figure 6. Patterns that reflect goals and intents of participants.

To illustrate this continuum, an assessment might offer basic comments or consider more complex insights, as in Example 1.

In connection with this subdomain, it is also important to recognize (and take into account in assessment) the fact that individual interlocutors may not have shared goals for a given communicative event—their goals may diverge or be in conflict. Along these same lines, assessors (and interpreters) must be aware of situations in which successful communication is not the goal of one or more interlocutors (e.g., when the intent is to confuse or misdirect).

Subdomain: Communicative Cues

  • Assesses the reconstruction of appropriate linguistic and communicative cues (e.g., lexical choices, discourse structures and strategies, contextualization cues) to achieve the intended purposes of the source and/or target audience.

This subdomain is perhaps the most detailed at this stage of our development, with the largest number of possible features and factors. One caveat, however, is that each source text offers only a few of the many options; what might be assessed in a given situation depends primarily on the source text used as a stimulus, along with the defining parameters the interpreter is given before beginning to interpret (Winston & Swabey, 2011).

For the most effective assessment of this subdomain, we recommend creating a detailed discourse analysis of the communicative cues in the source and a similar analysis of the types of target cues predicted in an effective interpretation, with examples reflecting each scoring level.1 For example, identifying when and where the indirect description in English triggers constructed dialogue or action in ASL, and more specifically, where it might be triggered within the selected source texts, can be both an effective assessment aid and teaching resource. For one example, an English speaker’s statement: “Sometimes teachers ask students if they remember something they discussed in the past” would usually be signed in ASL using constructed dialogue.

Currently, many evaluators might see constructed dialogue and evaluate whether it is correctly and clearly produced. This rubric goes beyond that, asking evaluators to consider whether or not the choice to use constructed dialogue is appropriate for reconstructing the English speaker’s content and intent. The rubric also provides input about the level of accuracy of reconstructions, ranging from basic (vocabulary, phrases, and utterances) to complex (discourse structures and metastructures, such as openings and closings, topic transitions, and turn-taking). For example, the interpreter might choose constructed dialogue as the appropriate interpreting structure, but still misarticulate it. Assessment of this range of communicative cues can also be envisioned on a continuum, as shown in Figure 7.

A horizontal arrow labeled PRODUCT: INTERPRETATION points from BASIC at its base to COMPLEX at its tip. Four slanted text boxes, starting at the base, are labeled WORDS/SIGNS, PHRASES/UTTERANCES, STRUCTURES, and METASTRUCTURE.

Figure 7. Continuum of discourse complexities.

To illustrate this continuum, see Example 2 for assessment comments ranging from basic to complex.

Example 2

Assessment comments along the product continuum, from most complex to most basic:

  • Metastructure: the formality of the turn-taking interactions (“with all due respect . . .”) needed to reflect the expectations of the setting
  • Structure: the ending of a subtopic was clearly marked with lowered prosody before the next subtopic
  • Phrases/utterances: an English discourse marker like “so then” or “next” was used when the signer shifted between topics and signed NEXT
  • Words/signs: that was the wrong sign for “cohesion”

At this point, it is important to further consider the point made earlier in Figure 5, that assessment should be the same, regardless of the source language and target language. While this point, in general, holds true, it is also important to recognize that the structures and purposes that are triggered in each direction can be very different. When the interpretation being assessed is an interactive one (involving conversation, for example), it is useful to split this subdomain even further into communicative cues triggered when interpreting from one language into the other and vice versa. This further split means that scoring and input can be directed toward the reconstructions from and into each working language. Focusing on contextualization cues that are relevant to those working from English to ASL versus ASL to English makes it easier to identify differences in skills depending on direction and to provide input on those differences. Using a single domain and thus providing a single score for it masks these differences, which are important for learning and growth. We thus recommend that when the stimulus to be interpreted involves two source languages (i.e., in the case of an interaction), raters provide two scores for this subdomain, one for each direction of interpreting. If the source text includes a single language, the split is unnecessary.
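When assessment records are kept in a spreadsheet or script, this directional split is straightforward to represent. Below is a minimal sketch in Python; the field names, the example ratings, and the rule of averaging the two directional scores into a single reported value are our own illustrative assumptions, not part of the rubric:

```python
# Sketch: recording the Communicative Cues subdomain per direction of
# interpreting. Field names, example ratings, and the averaging rule
# are illustrative assumptions, not prescribed by the rubric.

def communicative_cues_score(ratings: dict) -> float:
    """Combine per-direction ratings (0-5 scale) into one subdomain value."""
    return round(sum(ratings.values()) / len(ratings), 1)

# Interactive stimulus: one rating per direction keeps directional
# differences in skill visible in the record.
interactive = {"English->ASL": 3.4, "ASL->English": 2.8}
print(communicative_cues_score(interactive))  # 3.1

# Monologic stimulus: a single source language, so no split is needed.
monologic = {"English->ASL": 3.6}
print(communicative_cues_score(monologic))  # 3.6
```

Keeping both directional ratings in the record, rather than storing only the combined value, preserves exactly the information that a single score would otherwise mask.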

Domain 2: Interpreting Performance Repertoires

This is the second major domain of the rubric, weighted at 15%. It assesses interpreters’ skills and solutions related to interpreting processes and challenges. This domain is more difficult to assess than the first but is both necessary and helpful. It encompasses visible behaviors and invisible cognitive processes, the latter of which the assessor must infer based on what is seen or heard during the interpretation. This domain, as well as the following one, is accompanied by the same caveat as the previously mentioned subdomain of communicative cues: each aspect of the domain can only be assessed if a specific repertoire is needed, and in many cases, it may not be. For example, if no interruptions or clarifications are required, it is not possible to assess the interpreter’s level of skill in performing them.2 This domain can be assessed both in real-time performances and with recorded stimuli. In a real-time setting, the interpreter needs to manage these challenges effectively; when interpreting recorded stimuli, the interpreter needs to indicate clearly when these challenges impact the quality of the interpreting.

Subdomain: Content Management

This subdomain includes two facets:

  • clarifying intended purposes and goals (for example, preparation of materials, educating consumers)
  • monitoring and management of accuracy and of participants’ goals/purposes—for example, the interpreter notes when a mistake is made or information is missed, corrects it, and communicates it to the participants effectively

As with communicative cues, this category asks evaluators to leave behind our previous approach, which focused primarily on assessing processing time (or the more outdated notion of assessing “lag time”), counting how long the person waits before starting to interpret. This category requires that we make inferences about the interpreter’s processing and decision making based on what we observe in their performance. Careful observation of external (observable) use of control mechanisms during performance can provide valuable insight into the interpreter’s processing and self-monitoring (cf. Herring, 2018); assessors can also take note of and evaluate the effectiveness of the control mechanisms employed by the interpreter, within the context at hand, regardless of whether the context is a real-life interpreting setting or a classroom practice session. This might include observations/questions such as:

  • Does the interpreter ask the signer/speaker to repeat themselves or to slow down, as needed to support effective performance? Or, using video or in a classroom context, does the interpreter pause the video or otherwise act to promote the most effective interpretation?
  • When interpreting simultaneously without the possibility of stopping the source language user, does the interpreter inform the target language user if/when the interpreter has missed something, and contextualize or offer information about what was missed?
  • When interpreting multiparty interactions, does the interpreter effectively handle turn-taking and/or overlapping talk?
  • Does the interpreter effectively and smoothly achieve transparency, such as keeping source language and target language users informed of side conversations (e.g., requests for repetitions or clarification)?
  • Does the interpreter take into account the needs of all parties to the communicative activity, or only their own? How is this accomplished?

Careful observations of interpreters’ management of the flow of content, as well as monitoring of its completeness/accuracy and responses to issues that arise, can inform raters’ evaluations of interpreters’ processing, decision making, and ability to manage the situation. These areas also overlap with the previously discussed subdomains of content, intent, and communicative cues, as well as the following subdomain, interaction management.

Subdomain: Interaction Management

This subdomain includes two facets:

  • presentation of professional self and alignment—taking into account all parties (and thus all languages) involved in the communicative situation
  • speed of presentation, interactions, turn-taking (overlaps, interruptions, backchanneling)

This category overlaps to some extent with content management but focuses the rater’s attention more directly on the interpreter’s approach to/management of interaction (rather than on the content of the text, per se). For example, in a given performance, an interpreter may have effectively reconstructed the content but have managed the interactional aspects of doing so in an awkward, clunky, or otherwise ineffective fashion.

This category can be assessed in both live (real-time) performances and interpretations of recorded stimuli; in the latter case, evaluators must inform interpreters of their expectations vis-à-vis this subdomain. The category “presentation of self and alignment” includes instances when an interpreter chooses to introduce themselves (or not), clarifies a mistake or omission, or repeats a missed segment. Interaction management is also assessed when the interpreter pauses (or not) the interaction for clarifications, indicates turn-taking effectively, or manages overlapping speech or signing.

Domain 3: Interpreting Setting Management

This is the final domain of the rubric, weighted at 10%. It focuses on the interpreter’s skills in managing and maintaining an environment conducive to effective interaction among participants. It encompasses two subdomains, which are combined for discussion here.

Subdomain: Situation/Environmental Management and Subdomain: Ergonomic Management

The first of these subdomains focuses on issues related to the physical environment, including visual and auditory access. The second focuses on issues related to ergonomics, good work habits, and self-care. These are essential skills; however, depending on the interpreting performances to be evaluated, they may not be triggered or observable in a given performance (cf. Herring, 2018). Individual differences are also likely to come into play: a given aspect of a setting may call for management in one situation or for one interpreter but not in another. Assessment of these subdomains must therefore be nuanced and take these caveats into account; it must also consider the relative contextual relevance of the subdomains to a given performance.

Rubric Scale

Scales for performance assessment rubrics need to be based on observable, consistently and reliably identified levels of achievement. Rubric scales and ratings should not be based on feelings or on a single assessor’s likes and dislikes. Scales should eliminate terms that evoke judgments of personality and values, such as “good” and “bad.” The descriptors developed for the Interpreting Performance Assessment Rubric have been designed in accordance with these best practices. They aim to be descriptive of the product and processes of interpretations and to avoid emotive/judgmental language. They are particularly influenced by assessment models from spoken language interpreting and translation that reflect these goals (e.g., Angelelli, 2009; Jacobson, 2009; Sawyer, 2004).

In addition, it is fundamentally important for all assessors using a rubric to have a similar understanding of the scale, the categories, and the levels, in order to achieve consistency across raters. Multiple raters using a rubric within an institution or organization must also regularly meet to discuss their understandings of the rubric and of the ratings to avoid drift and to maintain consistency across raters.

The rating scale accompanying the Interpreting Performance Assessment Rubric is based on a traditional 0–5 scale found on many rubrics. Such rubrics typically rate in whole-point or half-point increments (e.g., 2.5, 3, and so on), especially for purposes such as hiring or certification. However, because this rubric is intended for use in education as well as in testing and certification, each whole-point rating has been divided into tenth-point segments (i.e., 0.1 through 0.9) to allow for more nuanced ratings. These increments allow evaluators to recognize smaller, yet notable, gains over time (for example, while enrolled in a formal educational program). An overview of the ratings appears in Figure 8.

Chart defining the scoring for the Interpreting Discourse Performance Rubric:

  • 5 (5.0): consistent patterns of all skills and abilities are detailed and nuanced; masterful.
  • 4 (4.0–4.9): consistent patterns of all skills and abilities, ranging from 4.8: often nuanced; 4.6: sometimes nuanced; 4.3: occasionally nuanced; to 4.0: detailed and able.
  • 3 (3.0–3.9): patterns of skills and abilities ranging from 3.8: consistently adequately detailed/accurate and able, possibly with rare nuanced segments; 3.6: usually adequately detailed/accurate and able; 3.3: sometimes adequately detailed/accurate and able; to 3.0: inconsistently detailed/accurate and able.
  • 2 (2.0–2.9): patterns of skills and abilities ranging from 2.8: often somewhat adequately detailed/accurate and able, possibly with rare adequate segments; 2.6: sometimes somewhat detailed/accurate and able; 2.3: occasionally somewhat detailed/accurate and able; to 2.0: rarely detailed/accurate and able.
  • 0–1 (0–1.9): skills and abilities are rare or not demonstrated, ranging from 1.5+–1.9: rare patterns of skills and abilities are identified; 1.0–1.5: some skills and abilities may appear occasionally, but few patterns are demonstrated; to 0–0.9: few to no patterns of skills and abilities are demonstrated.

Figure 8. Scoring key. Note. 5.0 indicates an interpretation that reflects mastery. 4.0–4.9 indicates a consistently reliable/accurate interpretation and effective interpreting process. 3.0–3.9 indicates a fairly reliable interpretation focused on content, and a somewhat effective interpreting process; consumers should be vigilant for accuracy. This figure is a duplicate of Figure 4.
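For readers implementing the rubric in a gradebook or script, the following minimal Python sketch shows how the suggested Figure 5 weights (75/15/10) and the 0–5 scale in Figure 8 might combine into an overall score. The rubric does not prescribe how subdomain ratings aggregate within a domain; the simple averaging and the rounding to a tenth below are assumptions for illustration:

```python
# Sketch: combining subdomain ratings (0-5 scale, tenth-point increments)
# into a weighted overall score using the suggested Figure 5 weights.
# Averaging subdomains within each domain is an assumption, not a rubric rule.

DOMAIN_WEIGHTS = {
    "meaning_reconstruction": 0.75,   # Domain 1
    "performance_repertoires": 0.15,  # Domain 2
    "setting_management": 0.10,       # Domain 3
}

def overall_score(ratings: dict) -> float:
    """Weighted overall score on the 0-5 scale, rounded to a tenth."""
    total = 0.0
    for domain, weight in DOMAIN_WEIGHTS.items():
        subscores = ratings[domain]
        total += weight * (sum(subscores) / len(subscores))
    return round(total, 1)

ratings = {
    "meaning_reconstruction": [3.6, 3.3, 3.4],  # content, intent, comm. cues
    "performance_repertoires": [3.8, 3.5],      # content mgmt, interaction mgmt
    "setting_management": [4.0, 4.0],           # situation/environment, ergonomics
}
print(overall_score(ratings))  # 3.5
```

The sketch makes the weighting concrete: even perfect setting management can move the overall score by at most half a point on the 0–5 scale, consistent with the rubric’s emphasis on meaning reconstruction.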

As noted, whole/half number increments can mask evidence of performance improvement over time. For educational purposes, the smaller, tenth-point increments used in this scale are particularly helpful in terms of reflecting areas of strength and areas that need more focus (improvement). Additionally, it is important to understand that the rating scale for this rubric is a performance scale rather than a Likert scale. While Likert scales have an equal increment between each rating level, performance-based scales, like those used in interpreting and language assessments, have unequal increments between levels.

Because this rubric is intended for teaching as much as, if not more than, for summative certification tests, the performance-based scale corresponds more closely with academic grading in U.S. institutions. Referring to Figure 9, the scoring levels can be compared to the grading levels in many U.S. university courses. On the performance scale, an increase from 0 → 1 → 2 → 3.5 can be compared to an increase in grades from F (scores of 0–49%) to D (scores of 50–64%). Although a student may have completed a significant amount of work, demonstrated growth, and improved by as much as 65 percentage points, these performances all still fall below the minimally acceptable level. The increments between 3.5 and 4.0 (65–79%) reflect growth in skills that approaches a minimally adequate performance of a C/C+; they build on the previous 64% of learning, reflect finer tuning and more depth and detail, and require more critical and analytical skills to traverse. Similarly, the increment from 4 to 4.5 (B = 80–89%) requires even more fine-tuning, and the distance between 4.5 (B+ = 90–95%) and 5 (A = 95–100%) represents nuanced improvements that may take years to achieve.

A chart showing how the rubric scores match with traditional grades and percentages:

  Rubric score   Percentage   Grade
  0–1.5          0–35%        F
  1.5+–1.9       36–49%       F+
  2–2.5          50–59%       D-
  2.5–2.9        60–64%       D
  3–3.5          65–69%       D+
  3.5+–3.9       70–79%       C-, C, C+
  4–4.5          80–89%       B
  4.5–4.8        90–95%       A-
  4.9–5          95–100%      A

Figure 9. Scale aligned with percentages and traditional U.S. grade scheme. Note. The print version of this figure is grayscale, while the Manifold and epub editions have this figure in color.

While experienced evaluators may understand the differences, many students and novice practitioners, and perhaps even some new educators and mentors, often misconstrue the whole-number points on the scale as equal increments. They are not, in fact, even steps when looking at an individual’s changes from one assessment to the next. For example, an individual’s performance might show an increase of more than 10 percentage points, but where on the scale those changes occur is important. An improvement of 10 points from the mid-60s to the mid-to-high 70s might seem large but does not reflect readiness to work. On the other hand, a relatively small increase of only 2 points, from 93% to 95%, actually reflects the crossing of a significant threshold. Likewise, an interpreter who moves from 79% to 80% crosses this rubric’s threshold from “needing supervision” to “able to work independently.”

Because this rubric relies on a 0–100 point system, it was easy to convert the scores to percentages and compare them to the grading systems used in most U.S. educational institutions. This can make the scoring, and the interpretation of scores, more familiar to those in the United States who know these academic grading levels. In Figure 9, the scale is aligned with both percentages and institutional grades. One addition to this rubric scale is color coding, which provides a useful gauge of each level’s correspondence to the overall scale and serves as a reminder of the scoring mindset for the performance assessment.
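Because the bands in Figure 9 have unequal widths, converting a 0–5 rubric score to a percentage is a piecewise mapping rather than a single multiplication. The sketch below encodes the Figure 9 bands; the linear interpolation within each band is our assumption, since Figure 9 specifies only the band boundaries:

```python
# Sketch: converting a 0-5 rubric score to a percentage and letter grade
# using the Figure 9 bands. Linear interpolation inside each band is an
# assumption; the chart defines only the band boundaries.

BANDS = [
    # (score_low, score_high, pct_low, pct_high, grade)
    (0.0, 1.5, 0, 35, "F"),
    (1.5, 1.9, 36, 49, "F+"),
    (2.0, 2.5, 50, 59, "D-"),
    (2.5, 2.9, 60, 64, "D"),
    (3.0, 3.5, 65, 69, "D+"),
    (3.5, 3.9, 70, 79, "C-/C/C+"),
    (4.0, 4.5, 80, 89, "B"),
    (4.5, 4.8, 90, 95, "A-"),
    (4.9, 5.0, 95, 100, "A"),
]

def to_percentage(score: float):
    """Return (percentage, grade) for a 0-5 rubric score."""
    for lo, hi, pct_lo, pct_hi, grade in BANDS:
        if lo <= score <= hi:  # shared boundaries resolve to the first band
            frac = 0.0 if hi == lo else (score - lo) / (hi - lo)
            return round(pct_lo + frac * (pct_hi - pct_lo)), grade
    # Figure 9 leaves small gaps (e.g., 1.9-2.0); flag rather than guess.
    raise ValueError(f"score {score} falls outside the charted bands")

print(to_percentage(3.9))  # (79, 'C-/C/C+'): still "needing supervision"
print(to_percentage(4.0))  # (80, 'B'): crosses the independence threshold
```

The two example calls reproduce the 79% to 80% threshold crossing described above, illustrating why a one-tenth change in the rubric score can matter more than a much larger change lower on the scale.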

Conclusion

Through reviews of assessment constructs and content and discussions with educators, mentors, and students, we have found that many interpreters and educators believe, implicitly and explicitly, that assessing interpreting is assessing language and primarily assessing ASL. Moreover, many believe that interpreting is simply a byproduct of language skills. As discussed previously, this rubric is designed to encourage a move away from such approaches. Although the rubric discussed here does in some ways resemble other existing rubrics, and many of the actual features assessed are similar, the conceptualization of interpreting and assessment—the mindset informing the rubric—has shifted. For example, instead of assessing English or ASL vocabulary, the rubric guides the user to shift to assessing how the interpreter effectively (or ineffectively) deconstructs and reconstructs content, intent, and other aspects of communication, jointly with the participants in the interaction. Experienced educators who use the new rubric may find that such a shift in mindset is challenging.

Although the unintentional and unrecognized use of our ingrained metaphors will require time and patience to eliminate, we need to work to do so. Just as our rubrics often drive and inform feedback, so does our feedback inform and guide interpreters’ learning. Shifting from a narrow understanding of communication—as a transfer of meaning based on words and linguistic issues—to a broader and more inclusive focus on all aspects of discourse, including the participants’ intents and purposes in communicative acts, is essential for all involved in interpreted interactions. Instead of perpetuating the idea that meaning is a static entity and the concept that language is an immutable conduit for conveying meaning, we need to radically reshape our language and eliminate these metaphors from our professional repertoires. It is time to reconceptualize communication as a fluid process involving attempts to share our ideas and thoughts using tools that never completely or adequately capture them. Instead of discussing language and communication as an exchange of tokens resulting in “correct,” “full,” or “exact” understanding, we need to construe meaning as dynamic and envision communication as a fluid process that is emerging and ever-changing, depending on the currents, undercurrents, and observable logjams that we can distinguish in the flow. Our conceptualization of interpreting—and of assessment of interpreting performance—needs to be nuanced and wide-ranging, as reflected in Figure 10. We must embrace this paradigm shift because our understandings and concepts of communication directly impact our work, whether as interpreters, educators, or assessors (Janzen, 2005; Quinto-Pozos, 2013; Wilcox & Shaffer, 2005).

A figure illustrating the differences and similarities of teaching and assessing effective communication and teaching and assessing interpreting. This figure is divided into two sections. The left section is labeled TEACHING AND ASSESSING EFFECTIVE COMMUNICATION. The right section is labeled TEACHING AND ASSESSING EFFECTIVE INTERPRETING. Centered between these sections is a vertical arrow, outlined in blue with the word PROCESS inside. At its base is the word BASIC. At the top is the word COMPLEX. To its left is a vertical text box with the word COMMUNICATING. To its right is another vertical text box with the word INTERPRETING. Along both sides are four horizontal text boxes going down the side of the arrow. The first reads PRESENTER INTENT, the second RECEIVER PURPOSE, the third INTERACTION PROCESS, and the fourth LINGUISTIC ISSUE. At the arrow’s base are two horizontal arrows, one pointing left, and the other pointing right. Both have the word BASIC at their bases and COMPLEX at their tips. The left horizontal arrow is labeled PRODUCT: DISCOURSE, and the right horizontal arrow is labeled PRODUCT: INTERPRETATION. Both horizontal arrows have four text boxes below. Starting with the box closest to the word COMPLEX, the text boxes read METASTRUCTURES, STRUCTURES, PHRASES/UTTERANCES, and finally, WORDS/SIGNS.

Figure 10. Structures and processes of interpreting and interpreting assessment.

The next stage of the rubric development will be to introduce it and use it in various settings and contexts. We anticipate that this will be an interesting challenge and look forward to confronting it.


1. Such an analysis is more feasible when using a simulated interpreting setting and text, which allows for detailed preanalysis. However, experienced interpreters and assessors can analyze sources and targets fairly effectively during authentic assessment settings as well.

2. While this may seem an obvious point, it is important when providing specifications for any assessment to inform the reviewers about what to do when aspects are not observed. Interpreters should not be expected to manufacture the need for clarification simply because they will be docked points if they do not.
