1

Reconceptualizing Communication

Just as the interpreter’s approach to the work is shaped by their understanding of what interpreting is, how it should be done, and what it should achieve (Janzen, 2013), the educator’s approach to evaluation is informed by their conceptualization of the interpreting task and ideal interpreting performance. As human-made artifacts, rubrics are imbued with the understanding of interpreting of those who created them. We thus begin this chapter by briefly discussing the conceptualizations of interpreting, communication, and meaning which inform our approach and are the underlying foundation of the Interpreting Performance Assessment Rubric.

Reconceptualizing Interpreting

Ways of understanding interpreting as a task/activity have evolved over the years. Authors reflecting on these shifts in the last decades of the 20th and first decades of the 21st century, particularly concerning ASL (American Sign Language)/English interpreting, have identified several metaphors1 that interpreters and educators have employed as frames through which to approach and understand interpreting performance (Janzen & Korpinski, 2005; Wilcox & Shaffer, 2005). One of the most common of these metaphors is that of the interpreter as a “conduit” (cf. Hoza, 2021; Roy, 1999; Wadensjö, 1998) or a “passive and invisible medium through which information passed” (Herring, 2018, p. 98). Nowadays, this metaphor is widely viewed as reductionist, incorrect, and outdated. However, as Wilcox and Shaffer (2005) eloquently argue, the field has not fully moved beyond this conceptualization of interpreting. While we may have moved beyond describing interpreters as conduits, we have not yet left behind the idea that meaning is fixed, that it is an entity which can be conveyed, much as one would convey an object from point A to point B. Such a perspective leads us, as interpreters, interpreter educators, researchers, and language teachers, to misconstrue the concept of meaning and the activity of communication, and therefore, the concept of interpreting other people’s communication: As Wilcox and Shaffer (2005, p. 28) put it, “through various iterations of the ‘role,’ educators have shifted focus from actual ‘communication’ to political and cultural behaviors of the interpreter.”

Although many people continue to conceptualize meaning as static, as something that is packaged and transferred like bits and bytes of data, this is a misrepresentation that ignores the complexity of how meaning is negotiated and constructed through discourse. In any communicative event, meaning is constructed through the lenses of content, intent, and form. Participants in a communicative event must construct their own understanding of a given text, whether signed or spoken, by drawing on their knowledge and experience and making necessary inferences. Chafe (1994, 2001, 2014) describes the processes involved in understanding: One person produces symbols (signs or sounds) that have been associated (by society, as part of a language) with thoughts or concepts; those symbols are then transmitted to others, who associate them with their own thoughts in order to unpack and understand them. The latter part of this process is far from straightforward. Communication, and the achievement of mutual (albeit partial) understanding, is thus a highly complex process. It is only further complicated when interlocutors do not share a common language, and their communicative acts are mediated by an interpreter (Angelelli, 2000; Wilcox & Shaffer, 2005).

In connection with this discussion, it is important to distinguish between the notions of form and meaning. While meaning is shared, co-created, and unique for each participant, form is the packaging—signs, facial expressions, sounds, and/or gestures used with communicative intent—employed by interlocutors to share and coconstruct meaning in interaction. Form, which includes linguistic, pragmatic, and contextual cues, is conveyed to our eyes and ears by light and sound waves, by electronic impulses, by print on paper, and by various other means. Such forms provide us with the evidence we use to build our understanding, and those forms can be measured, counted, described, and captured. Meaning, however, cannot be counted or measured. Meaning is, by nature, fluid, different for each person, and influenced by individual filters, experiences, and world knowledge. People use language, gesture, and context to share their ideas and intentions. As participants in interaction, we analyze each of these aspects in our efforts to understand the meaning and intent of others.

Communication is at the heart of our work as interpreters, and, as such, a nuanced understanding and appreciation of communication must be the foundation of our understanding of the interpreting task and our approach to interpreter education. As interpreters, we must embrace the concept that communication is inferential and that language is not simply a conduit for thoughts and ideas. We must develop a deeper understanding of “meaning” and how it is negotiated and constructed through discourse. As educators, we must promulgate the view that interpreting is not simply a mechanism for substituting static lexical units or grammatical structures between languages. As Janzen (2013, p. 95) reminds us, educators have a fundamental role in shaping learners’ understanding of and approach to the work: “How the interpreter educator talks about meaning ‘transfer’ influences how students of interpretation apprehend the text, work to understand meaning, and recognize their own role in the process.” It is therefore incumbent upon us to reflect on and critically examine the conceptualizations of interpreting through which we approach our work, both as interpreters and educators, to avoid contradictions and disconnects that may lead us, albeit unwittingly, to pass on misconceptions to our students.

There is a broad range of factors that might be included in assessment, and more specifically, in the assessment of interpreting. The assessment of meaning-based interpreting performance has a much narrower focus, directly assessing the product (the interpretation) and indirectly assessing the process (interpreting decision making). Although some may use simple checklists for assessment, most frequently some type of rubric is developed to evaluate the quality of these two factors in any single assessment. Indeed, a range of such snapshot assessments can be gathered to build a portfolio that reflects not only performance but also the growth of an interpreter’s skill and decision-making abilities. As you read this section, it may be helpful to keep in mind the interpreting performance rubric used at your institution or organization, comparing it to the expectations for rubric creation and application that we discuss.

Regardless of the final form of any assessment tool, as we explore what we assess, we must first establish our expectations of assessment tools: what we expect them to be and do, and what we must carefully avoid expecting them to be or do.

Assessments should be:

  • valid, which means they must adequately assess the relevant, and only the relevant, aspects of interpreting;
  • reliable, ensuring our confidence that results are consistent regardless of the assessor; and
  • authentic, appropriately assessing the actual experiences of interpreting.2

Further, if we are also educators, assessments should:

  • measure student success at learning what they need to learn as budding professionals;
  • effectively inform our decisions about success in learning and therefore our teaching; and
  • provide the type of evidence we need to be able to measure effective learning and effective teaching. (Wiggins & McTighe, 2005, Chapters 7, 8)

Approaches to interpreting performance assessment often cling to outdated misconceptions about what interpreting is, what an interpretation is, and how best to assess it. Many of these pitfalls are part and parcel of a larger trap: measuring what is easy to see or hear (Fenwick & Parsons, 2009), rather than what is important and relevant for an effective interpretation and effective interpreting. These misconceptions, and the corresponding directions in which we need to refocus our assessments and what appropriately needs to be assessed, are briefly summarized in Figure 1 and further explored in Chapters 2, 3, and 4.

A two-column chart showing how outdated ideas about interpreting can be reconceptualized for assessments. A green arrow points from the first column head, Interpreting Misconceptions, to the second, Reconceptualized Focus of Assessments:

  • Misconception: Meaning is a static thing that can be conveyed. Reconceptualization: Effective reconstruction reflects the presenters’ intents and the audience’s needs.
  • Misconception: Language assessment IS interpreting assessment. Reconceptualization: Interpreted discourse effectively reflects the presenter’s intent and the audience’s needs.
  • Misconception: Rules for frozen/written language (grammar, syntax, etc.) apply to discourse. Reconceptualization: Interpretation reflects the interactional and cultural competencies of the participants.
  • Misconception: Interpretation should be assessed differently based on its direction. Reconceptualization: Achievement of participant communication goals, regardless of the source and target languages.
  • Misconception: Miscue/error analysis is adequate. Reconceptualization: Identify effective, as well as ineffective, strategies and repertoires.

Figure 1. Interpreting misconceptions and facts.

Regardless of the purpose of the assessment, it must be grounded in the fundamental concepts of validity, reliability, and authenticity (e.g., Sawyer’s [2004] adequacy, confidence, and appropriateness). Those developing and using an assessment must be able to clearly demonstrate that it does assess what it claims to assess. That is, an interpreting performance assessment must assess concepts/constructs relevant to interpreting (validity), not prerequisites such as basic language fluency. In terms of reliability, it must be demonstrated that every person expected to score performances does so consistently and at similar levels. If the assessment is a national certification test, then extensive evaluation of both validity and reliability must be completed and provided. If it is a classroom test, extensive testing may not be possible or necessary, but the teacher and the students should be on the same page about what is being assessed and at what levels.
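As one concrete illustration of the reliability dimension (our example, not part of any rubric discussed in this book), agreement between assessors applying the same rubric is often quantified with a chance-corrected statistic such as Cohen’s kappa. The following is a minimal sketch, with hypothetical scores, assuming two assessors have rated the same ten performances on a four-point rubric scale:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two assessors' scores."""
    n = len(rater_a)
    # Observed agreement: proportion of performances scored identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected chance agreement, from each rater's score distribution.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(freq_a[s] * freq_b[s] for s in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical scores from two assessors for ten performances (1-4 scale).
scores_a = [3, 2, 4, 3, 1, 2, 3, 4, 2, 3]
scores_b = [3, 2, 3, 3, 1, 2, 4, 4, 2, 3]
print(f"kappa = {cohens_kappa(scores_a, scores_b):.2f}")  # kappa = 0.71
```

Values near 1 indicate strong agreement between assessors; values near 0 indicate agreement no better than chance. Note that such a statistic speaks only to reliability; it says nothing about whether the rubric is valid or authentic.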

The History and Evolution of Assessment From Spoken Language Translation to Today

Most of our current practices of assessing interpreting have been handed down via spoken language models that focused initially on translation, were then modified to apply to spoken language interpreting, and were eventually applied to sign language interpreting. As we understand more and more about meaning-based interpreting assessment, it is important to recall our foundations. Some are essential to our current approaches; others serve to remind us of how we have ourselves evolved. As such, many of the concepts need to be examined for assumptions and principles that (1) may (or may no longer) be supported by current science and state-of-the-art research, and (2) may (or may not) be applicable to time-driven interpreting as compared to translation. Yet, too often, those sources have been integrated and then forgotten. We all would benefit from actively examining the broader world in which our field is situated. It is time we engage with this broader perspective, avoiding reinventing the wheel and instead gaining opportunities to learn and grow.

Many traditional interpreting performance assessments currently in use analyze the lexical, semantic, and grammatical levels of linguistic equivalence achieved in the product, and then determine whether the target translation was “faithful to the source, objective and impartial” (Angelelli & Jacobson, 2009, p. 2). However, linguistic equivalence is rarely relevant in a quality interpretation. What is relevant is achieving the participants’ function and purpose: communication via interpreting. Reviewing the history of other perspectives on interpretation quality, Angelelli and Jacobson (2009) identify several scholars who emphasize the importance of understanding the functions and purposes of interpreting. Nida’s (1964) discussion of dynamic equivalence (more aimed at the target audience) and formal equivalence (more aimed at the linguistic structure) is one example. He intertwines the importance of both the content and the form in both the source and target texts. He emphasizes that content and its form are inseparable, and that all content must be considered within the cultural contexts of time, setting, and function of the original work to be translated. Nida calls for a sociolinguistic approach as our most effective choice in addressing quality and adequacy in translation. Although published nearly 60 years ago, Nida’s discussion is timely for us today as we examine issues in signed language interpretation beyond the scope of lexical items, errors, and grammatical correctness. Other examples include Newmark’s (1982) communicative/semantic distinctions; Toury’s (1995) notions of acceptability (norms of the target culture) and adequacy (norms of the source culture); and the overall idea of Skopos theory (Nord, 1991, 1997)—the purpose of the translation determines that translation’s quality.

An interesting and important split in early spoken language discussions is especially enlightening as we explore the evolution of our present-day approaches to assessing interpreting performance (Sawyer, 2004). This split was between the objectivist linguists’ perspective that word equals meaning and the sociolinguists’ perspective that focused on interaction. The latter recognized that the spoken word (or written text in translation) did not embody meaning but rather only reflected it (e.g., Seleskovich & Lederer, 1995). This is a distinction we struggle with today, as we see a similar split in performance assessment approaches, where many interpreting assessments conflate words and signs with meaning. Interpretations and interpreting are often assessed in isolation from communication functions and interaction goals. Meanwhile, interpreting researchers interested in interaction continue to explore the goals and activities of interaction, and attempt to integrate linguistics (e.g., discourse analysis) as they can. Few people in interpreting and interpreting performance assessment have integrated these concepts into effective interpreting assessments.

Another growing perspective that is largely excluded from interpreting assessment is that of cognitive science and cognitive theory in interpreting. As Angelelli and Jacobson (2009) conclude:

[However,] . . . none of the models of translation quality presented thus far address the “how to” of effectively and accurately measuring quality. The researcher is left to ponder questions related to how “reader response” can be measured and compared; how to determine the variables that demonstrate whether a translation is acceptable to the target discourse community; or how the “function” of a translation is defined in measurable terms. These are all questions that have not been clearly addressed in the literature. (pp. 2–3)

And these questions still need to be sufficiently addressed in interpreter education and assessment. Angelelli (2009) moves us closer to these discussions, proposing the construct of “translation ability” and suggesting how we might use rubrics. Interweaving crossdisciplinary ideas, she considers integrating our understandings of test development, communicative translation (Colina, 2003), Skopos theory (Nord, 1991, 1997), and crosscultural communicative competence theories (Bachman, 1990; Hymes, 1974; Johnson, 2001). We can add to these fields our growing understanding of adult learning and learning-centered educational approaches. Angelelli (2009) reminds us that we are part of a larger field of spoken language interpreting and translation studies. She emphasizes that defining the test construct is the first step in test construction, and she grounds her discussion in the literature. Finally, she offers examples from a rubric that she has developed to assess translation competency, one that professional associations could use.

We are reminded to think beyond our often limited introductions to the broader field (Angelelli, 2009; Angelelli & Jacobson, 2009; Pöchhacker, 2004; Sawyer, 2004), and not to limit ourselves by adopting the individual theories that one or two researchers have explored without first carefully considering why we are doing so. There is no question, however, that their work is invaluable! We owe it to those early researchers, ourselves, and our field to expand and evolve from their seminal work.

Reconceptualizing Assessment of Interpreting Performance

Having explored the implications of our conceptualizations of interpreting for education and performance, in this section we turn our attention to assessment.

Historically, assessment of ASL/spoken English interpreting in the United States has been primarily focused on the identification and analysis of errors (“miscues” in Cokely’s [1986] well-known nomenclature), which are generally understood to be deviations from meaning. This approach is problematic since it has tended to view meaning narrowly, as something that is in the text, thus perpetuating “the view that meaning can be discovered and transferred and that deviation from an objectively ideal transfer can be quantified” (Janzen, 2013, p. 104). Assessment approaches that rely on equivalence-based miscue analysis have been described as:

akin to what a chemist would do when determining the weight of a compound: place a known quantity of weight on one side of a scale, the compound on the other side, and remove or add the compound as necessary to make the scale come into balance. But meaning is not so neat; communication is not chemistry. Meanings across languages cannot be weighed on a balance to determine objective equivalence. (Wilcox & Shaffer, 2005, pp. 44–45)

An additional problematic aspect of assessment rooted in conduit or transference conceptualizations of interpreting is that such approaches are imbued with the assumption that communication is by nature successful—that achieving equivalence of meaning will ensure successful communication. This assumption is a faulty one; in fact, communication is often fraught with partial and/or complete misunderstandings (e.g., Bazerman, 2014; Chafe, 1994, 2001, 2014; Reddy, 1979, 1993; Tannen, 1986, 1989).

Viewing interpreting through the perspective of conveyance and transference, rather than on the basis of the more nuanced and contextualized conceptualization described in the previous section, has far-reaching implications for the assessment of interpreting, both in terms of process and product. Implicit within a conveyance/transfer view is the assumption that meaning exists as a single, correct, and immutable artifact. This assumption, whether explicit or implicit, leads to approaches to assessment that involve comparing target language product with source language meaning, or that seek to evaluate equivalence (or lack thereof) between the source language and target language texts. Such approaches do not reflect current understandings of the coconstructed nature of communication. As Wilcox and Shaffer (2005, p. 45, emphasis in original) highlight, “we do not have direct access to the meaning (as if there is only one!) of the source text, and, if we are third-party evaluators of an interpreted text, neither do we have direct access to the meaning of the target text.” As evaluators of someone else’s performance, we only have access to our individual deconstructed (and reconstructed) understanding of the text.

Assessment-related discourse that refers to determining equivalence between source language and target language suggests, erroneously, that there is only one right way to interpret a given text. It implies that complete equivalence of meaning between a source and target text is achievable, although, in reality, the notion of equivalence is a fuzzy one involving inference, approximation, and compromise (cf. Baker, 2018). Approaches to assessment that view meaning as in the text and that focus primarily on whether or not equivalence was achieved lead evaluators—and interpreters—to mistakenly assess interpretations in a binary fashion: a given interpretation is either good, meaning it has achieved equivalence, or bad, meaning it has not.

This type of binary, error-focused approach does not reflect the contextualized nature of communication, and does not allow for nuanced assessment of interpreting performance and product. It also fails to account for the situatedness of communication—as Quinto-Pozos (2013, p. 12) reminds us, “a message is communicated in different ways to people across different situations, and this must be considered when evaluating interpretations”—and for the potential for miscommunication inherent in any attempt at communication.

An additional issue that compounds the problematic nature of assessments focused on miscues and error analysis is that, in the U.S. context, assessment has tended to focus on linguistic issues to the detriment of other aspects of interpreting performance. The analysis is often narrow in scope, such that the primary focus is on the interpreter’s language use, and, more specifically, on individual words, signs, or phrases (and, in many cases, specifically on usage of ASL), rather than on other aspects of interpreting performance and decision making (Smith & Maroney, 2018; Winston, 2023). The tendency to focus on assessing language (and, in particular, ASL) can be traced back to the fact that interpreter education in the United States has often involved teaching language alongside, or even instead of, teaching interpreting. This has led to a lopsided educational program in which the development of “the whole interpreter” (Smith & Maroney, 2018, p. 6) is neglected because of the need to focus on the acquisition and development of language skills in ASL. We argue that assessment of both students and professional interpreters must evaluate interpreting rather than being primarily focused on language, which is only one facet of the interpreting task.

Our approach to assessment must consider more than surface-level linguistic equivalence and draw on our understanding of the coconstructed and situated nature of meaning within a communicative context. As part of this approach, we must move beyond focusing solely on errors and also attend to the (in)effectiveness of the interpretation in context. Such an approach will also take into account “the specifications of the particular translation to be performed and . . . the user’s needs” in judging the “adequacy” of the interpreter’s performance (Hatim & Mason, 1990, p. 8). Successful communication—whether or not it is interpreter mediated—should not be taken for granted. Rather, as Bazerman (2014, p. 229) states, “[instead of] taking transparency of language as the norm, we should rather take those situations that achieve high degrees of alignment, shared meaning, and reliability of co-reference as specific accomplishments, to be examined for the special means of achievement in their situation.”

Although the users of the interpretation are the final arbiters of the (in)effectiveness of a given interpretation, assessment of the product in consultation with users cannot be our sole form of assessment. In assessing interpreting performance and ability, our focus must encompass both product quality and process—the interpreter’s skill and decision making (Angelelli, 2009; Jacobson, 2009; Kim, 2009; Larson, 1984; Nida, 1977; Russell & Malcolm, 2009). Particularly in the context of interpreter education and professional certification, we must have mechanisms to assess the processes that led to the production of the product. Interpreting involves an array of complex processes and decisions, all of which are reflected in the product. Therefore, a complete assessment of interpreting performance requires analysis and evaluation of both process and product and must take into account “the context and facts about shared and unshared knowledge among participants” (Quinto-Pozos, 2013, p. 120).

In moving away from the binary approach to judging equivalence (or lack thereof), we must adopt a holistic, evidence-based approach to assessment, fully appreciating the complexities of communication, coconstruction of meaning, and interpreting performance. Assessment of interpreting performance must be based on research about successful communication with and through interpreters. We need to study the features of a successfully interpreted interaction and then base our assessments on those criteria. Our aim must be to assess effective interpreting practice rather than to identify deviations from an imagined ideal. We must take as our starting point the expectation (norm) that communicative success is likely to be partial and that participants’ understandings and worldviews are likely to differ—and then focus our assessment on the factors and aspects of the interpreter’s work that have contributed to communicative success, including the participants’ own judgments of the interpretation’s effectiveness in supporting their communication. In doing so, we must point to and analyze moments of success and areas of concern; we must address instances of “meticulous strategizing” (Leeson, 2005) as well as evidence of inadequate processing or decision making. We can then productively employ terminology such as “(in)effective,” “(un)successful,” or “(not) functionally accurate” to describe interpreting performance. Rather than discussing achieving equivalence, we might more productively discuss approximating, as closely as possible, the interpreter’s best understanding of the source language meaning, taking into account the context/situation, broadly writ, and the people involved in communication. As we pursue the reconceptualizing and refocusing of interpreting assessment, we next review rubrics as the tool we have chosen to frame our project.


1. These are commonly referred to as “models” in the literature; however, we avoid that usage here, as they are more appropriately described as metaphors or analogies.

2. Sawyer (2004, p. 13) uses the terms “adequacy,” “appropriateness,” and “confidence” to describe methodological issues in assessment research—yet, what is assessment but “research” into the competence of an interpreter and the quality of an interpretation? Applying Sawyer’s terms to assessment may be helpful to readers who have long been put off by the traditional terminology:

Adequacy = validity: Does the test adequately assess the full realm of interpreting, nothing more and nothing less? Does it focus on interpreting processes and products, and not on language skills?

Confidence = reliability: Do the results give us confidence in the interpreter’s process and product? Can we expect that if someone passes, they can interpret? Do all assessors produce similar results?

Appropriateness = authenticity: Does the test assess the appropriate competencies needed for the context/task? If general, does the test reflect general skills? If medical-specific, does it reflect those skills?
