Beyond Equivalence: Chapter 2

2

Rubrics for Assessment

We began our search for an effective and appropriate rubric by reviewing the history of interpreting assessment and the larger context of how that history has shaped our broader concepts of assessment in general. There are many ways to approach assessment and evaluation, and usually, the more varied the approaches, the more complete a picture of competence we can form. Types of assessment include knowledge testing and performance testing. Tests of knowledge use questions such as yes/no, multiple choice, fill in the blanks, and open-ended essays, and are often scored by identifying correct answers rather than reflective responses indicating a deeper analysis of a situation. Performance tests, such as completing a task on the spot or completing various parts of a task at different times, often use checklists and rubrics for scoring. These might range from brief, basic checklists (does the candidate have “it”: a bachelor’s degree, interpreting certification?) to rubrics that score complex traits and performance success. Further, skills and knowledge are often assessed via third-party input, such as experts’ observations, supervisors’ reports, mentors’ feedback, or letters of recommendation.

Finally, all of these can be combined into portfolios, which are collections of competencies demonstrated via various avenues. Russell and Malcolm (2009) provide a comprehensive discussion of the AVLIC certification assessment, sharing an insider’s perspective of this interpreting exam. Malcolm (1996) provides a clear and succinct discussion of portfolios used in interpreting education. There are also a variety of more generic rubrics and rubrics development sites that can inform rubric development, including, as a few examples, Assessment Strategies and Tools: Checklists, Rating Scales and Rubrics; RubiStar; and Authentic Assessment Toolbox.1 Most often in performance assessment, some type of rubric is developed to assess the quality level of the performance. Rubrics are intended to take our subjective intuitions about a performance, in this case, an interpretation and/or the interpreting we see, and turn them into a process that is supported by evidence, observations, and objective measures to the extent possible. While no rubric will completely eliminate our intuition and expertise, it can and should help us explain and support our assessments and, in the end, help those being assessed learn and grow through the experience. Further, it can remind us of our biases and ingrained patterns of thought. For example, our own language styles, be they grounded or influenced by race, gender, age, nationality, and/or education, can easily result in us, as educators or as assessors, labeling something as wrong simply because it is not our way of communicating. This can be especially problematic when working with students or interpreters from marginalized linguistic communities. Using rubrics that remind us of the diversity of communication among people in different settings and with differing goals and intentions is a valuable addition to our skill set.

In the following sections, we will discuss the components of rubrics, along with specific interpreting practices. We review some performance assessment approaches and begin to look at specific constructs and features that people use to assess interpreting performance skills. You might find it useful to identify a rubric or checklist that you use or plan to use, or that your institution or agency uses, to explore its construction and practicality for assessing interpreting performance.

Rubric Components

While performance assessment rubrics can be categorized as either holistic or trait-analytic, it is more useful to consider these as a continuum, where a holistic rubric is more general, and often used in summative evaluations (e.g., certification tests), and trait-analytic rubrics are more specific, and are often used for formative teaching and learning purposes (the differences can be seen in the following rubric discussion). Regardless, every rubric requires three major components: (1) the domains to be assessed, (2) the scale used to assess, and (3) the descriptors that relate the scale to the expected skills in each domain. In addition, any rubric is most effectively used when parameters and specifically defined contexts are provided. These include defining the types of settings and texts where the rubric can be effectively applied, and describing factors that will impact assessment scores. These can include, but are not limited to, describing the purpose of the assessment, the receiver and presenter goals, as well as setting expectations, such as whether and how interpreters are to demonstrate process management (e.g., interruptions, need for clarifications, and indications of challenges in the environment).

Regardless of the level of detail provided, ranging from holistic to trait-analytic, every rubric must have these three components: domains, also referred to as criteria or categories; a scale; and descriptors that relate the scale to the domains (or the domains to the scale). To develop these, either for large-scale, professional-level gatekeeping (e.g., Registry of Interpreters for the Deaf [RID], Educational Interpreting Performance Assessment [EIPA], Board for Evaluation of Interpreters [BEI]) or for daily individualized on-the-spot assessments, rubric measures need to make consistent scoring possible across time and assessors, whether those assessors be certification evaluators, teachers in a program, or mentors working with novice interpreters. The more this is achieved, the more reliable the rubric is considered.
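To make these three components concrete, the following is a minimal sketch, assuming a simple programmatic representation; the class name, fields, and sample entries are illustrative only and are not drawn from any published rubric.

```python
from dataclasses import dataclass

@dataclass
class Rubric:
    domains: list[str]                       # criteria/categories to be assessed
    scale: list[int]                         # ordered scale increments (e.g., 1-5)
    descriptors: dict[tuple[str, int], str]  # (domain, scale level) -> expected performance

rubric = Rubric(
    domains=["Meaning Reconstruction", "Interpreting Performance Repertoires"],
    scale=[1, 2, 3, 4, 5],
    descriptors={
        ("Meaning Reconstruction", 1): "Consistent and major misunderstandings of the source meaning.",
        ("Meaning Reconstruction", 5): "Detailed and nuanced reconstruction of the source meaning.",
        # ...a usable rubric supplies a descriptor for every (domain, level) pair
    },
)

# Consistent scoring across time and assessors starts with completeness:
# flag any (domain, level) pair that has been left without a descriptor.
missing = [(d, s) for d in rubric.domains for s in rubric.scale
           if (d, s) not in rubric.descriptors]
print(missing)
```

In this framing, improving a rubric’s reliability is largely a matter of filling in and refining the descriptor grid so that two assessors reading the same cell reach the same judgment.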

Domains

Domains are the essential elements of whatever skill is being assessed. In our considerations, the domains are the elements that we deem essential for creating an effective interpretation. One important and common-sense requirement is that the domains being assessed are actually important to the overall performance and completely cover the elements required. They are identified by experts based on research and by ongoing comparisons to consumers’ needs. If these requirements are met, the assessment is valid—it tests what it claims to test. Unfortunately, the domains often assessed in our field fail to assess interpreting. Instead, the domains of many rubrics in our field focus primarily on assessing basic language skills. While these are certainly easily identified in an interpretation, they are fundamental prerequisites to effective interpreting, and are not interpreting domains in and of themselves. It is indisputable that competence in each of the languages involved is essential. But competence in each language must be established before studying interpreting. Developing and assessing language competencies is not the primary focus of the evaluation of interpreting. Language skills-centered approaches to assessing interpreting skills and competencies at best distract from, and at worst neglect, the assessment of the quality of the interpretation as a whole and do not do justice to the communities that depend on, and expect, effective interpreting in their daily lives.

Recent approaches to assessing interpreting have begun to focus on interpreting criteria. In spoken language interpreting, rubrics developed by Sawyer (2004) and Angelelli (2009) reflect an understanding of this need. Other approaches and analyses have begun to focus more on interpreting and on the ways each language user constructs meaning. Rubrics that focus on how meaning is constructed still include language issues. Still, they focus on how language is used in the source language and guide the reconstruction of similar meanings in the target language. Janzen (2005) explores the many layers of linguistic and cognitive decision making that an interpreter faces all the time. He explores the differences between ASL and English as languages and discusses many of the challenges that these differences bring to interpreting practice, especially if we are approaching it as a meaning-based activity in which we actively coconstruct meaning in our work. Rubrics that describe features, structures, and metastructures that reflect meaning in each language, and compare how they are used, both similarly and differently, assess interpreting performance2 (e.g., Angelelli, 2009; Sawyer, 2004).

Domains need to focus on how the interpretation and the interpreter reconstruct the content, intent, and discourse usages effectively for any given setting. Effective use does entail using appropriate discourse structures in the target. For example, if the source uses constructed dialogue for formal emphasis, the interpreter’s choice of appropriate target discourse features for formal emphasis needs to be assessed, rather than their ability to understand constructed dialogue. Each of the domains and subdomains discussed next focuses on interpreting interaction using language, rather than on the production and/or comprehension of one language or the other. These are the criteria needed to create a successful interpretation for consumers.3

  1. Domain: Meaning Reconstruction (i.e., interpretation—the product) (Weighting = 75%)
    • Subdomain: Content
      • reflects patterns of understanding and reconstruction of the major and minor themes
    • Subdomain: Intent/Purposes
      • reflects patterns of understanding and reconstruction of the discourse and interactional intent of the participants (source and/or target)
    • Subdomain: Communicative Cues
      • appropriate linguistic and communicative cues (e.g., lexical choices, discourse structures and strategies, contextualization cues) are reconstructed to achieve the intended purposes of the source and/or target participants
  2. Domain: Interpreting Performance Repertoires (i.e., the interpreting process) (Weighting = 15%). Interpreter demonstrates skills in and solutions to interpreting processes and challenges, in:
    • Subdomain: Content Management
      • clarifying intended purposes and goals (e.g., preparation of materials, educating consumers)
      • monitoring and management of accuracy, participants’ goals/purposes (e.g., notes when a mistake is made, information is missed, corrects it, and communicates it to the participants effectively)
    • Subdomain: Interaction Management
      • presentation of professional self and alignment (with participants—source and target)
      • speed of presentation, interactions, turn-taking (overlaps, interruptions, backchanneling)
  3. Domain: Interpreting Setting Management (Weighting = 10%). Interpreter demonstrates creative skills and solutions in managing and maintaining an environment conducive to effective interaction of participants.
    • Subdomain: Situation/Environmental Management (e.g., visual and auditory access)
    • Subdomain: Ergonomic Management. Interpreter demonstrates self-care (e.g., breaks, position)
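The chapter assigns weightings to these three domains but does not prescribe a formula for combining them; the sketch below is an assumption-laden illustration in which each domain receives a single score on the 0–5 scale discussed later in the chapter and the weightings above are applied as a weighted average. The function name and example scores are hypothetical.

```python
# Hypothetical weighted-average scoring using the weightings listed above
# (75% / 15% / 10%); one 0-5 score per domain is assumed.
DOMAIN_WEIGHTS = {
    "Meaning Reconstruction": 0.75,
    "Interpreting Performance Repertoires": 0.15,
    "Interpreting Setting Management": 0.10,
}

def overall_score(domain_scores: dict[str, float]) -> float:
    """Combine per-domain scores into one weighted overall score."""
    return sum(weight * domain_scores[domain]
               for domain, weight in DOMAIN_WEIGHTS.items())

# Example: the product (meaning reconstruction) dominates the result
# because of its 75% weighting.
scores = {
    "Meaning Reconstruction": 4.5,
    "Interpreting Performance Repertoires": 3.0,
    "Interpreting Setting Management": 3.0,
}
print(overall_score(scores))  # 4.125
```

Whatever combining rule is ultimately adopted, the heavy weighting of meaning reconstruction signals that the product, rather than the mechanics surrounding it, carries the assessment.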

The concept of assessing interpreting, rather than language and vocabulary, is an old idea, but for many in signed language interpreting, it is a novel and sometimes challenging concept. Many of the actual features assessed are similar; it is more often the mindset that is challenging. For example, instead of assessing English or ASL vocabulary, the rubric reinforces the shift to assessing how the source content and intent are effectively (or not) interpreted, that is to say, coconstructed, in the target language by the interpreter. This may seem a small point, but as our rubrics often drive and inform feedback, so does our feedback inform and guide interpreters’ learning.

Many interpreters and educators today believe, implicitly and explicitly, that assessing language is assessing interpreting and that interpreting is simply a byproduct of language skills. Two commonly used rubrics reflect this approach (whether they reflect the mindset of the developers is not known). An analysis of the EIPA (Maroney & Smith, 2010) shows that only ~14% of its criteria focus on interpreting. Following their approach, Winston (2023) analyzed several existing rubrics for their focus on interpreting rather than language. As another example, Taylor’s rubrics (2002, 2017) devote only ~23% of the entire rubric to interpreting. They are couched in examples of ineffective interpreting practices, and they offer explicit descriptions for those struggling with what people often label as interpreting when, in reality, they are struggling with basic language acquisition. These rubrics do not effectively measure interpreting performance and may be misleading to users who think they do.

As noted previously, skills in both languages are critical foundations for interpreting. However, just as being a skilled ASL signer or English speaker does not magically turn a person into a skilled interpreter, assessing ASL and English skills does not magically measure interpreting competence. Any rubric that focuses on the correctness of language structures per se (ASL or English lexical levels, ASL or English prosodic production), levels of articulation (also known as sign production), affect, or grammatical structures (ASL verb depiction, English grammar, NMS, constructed dialogue, use of space) leads us to assess basic language competencies, not interpreting. This type of assessment rubric, with such a focus on language rather than on interpreting, exacerbates the trend in interpreter education identified by Maroney and Smith (2010). They note that

interpreter education has primarily focused on ASL acquisition and competence of second language users. Historically, when interpreting students were not developing requisite ASL skills in short-term programs, programs were made longer and ASL requirements increased. This focus on ASL development neglects the development of the whole interpreter. (p. 6)

Two rubrics currently used in the field of ASL/English interpreting, the EIPA and Taylor’s domains (2002, 2017), are briefly discussed here in relation to the expectations of interpreting assessment domains.

Educational Interpreter Performance Assessment (EIPA)

As one example, an analysis of the EIPA (Maroney & Smith, 2010; Smith & Maroney, 2018), which is a commonly used interpreting assessment, reveals that of the 36 equally weighted “interpreting” criteria listed as being assessed, 69% (25/36) focus on ASL production and fluency. Another 17% (6/36) focus on English production and fluency, and only 14% (5/36) focus on what might be considered actual interpreting skills and processes. Further, the language criteria under both languages focus on basic language competence at the phonology-syntax levels (e.g., use of [sign space] verb directionality/pronominal system; can read and convey signs). The “interpreting” skills criteria include “follows the grammar of ASL or PSE” and “sentence boundaries.”

However, a more detailed analysis, still counting only the criteria, shows that of the 25 criteria labeled as ASL, a more accurate/precise label would be “some sort of sign communication” since the EIPA specifically does not assess ASL, but rather any signing (including outdated terms such as “Pidgin Signed English” [PSE], now more accurately labeled “contact signing” by linguists). Likewise, “lag time” is now understood as processing time and not simply the amount of time the interpreter’s production lags behind the source’s production. In this count, there are only two of the 36 criteria that specify only ASL. These are:

  • Voice to Sign (production of signing)
    • Use of Signing Space: I. Location/relationship using ASL classifier system;
    • Interpreter Performance: J. Follows grammar of ASL or PSE (if appropriate).
  • Sign to Voice: Can read and convey signer’s:
    • A. signs; B. fingerspelling; and D. nonmanual behaviors and ASL morphology (highlighting/emphasis added).

Interpreting assessment is not basic language assessment. As such, the criteria for the EIPA are perhaps more focused on assessing language fluency rather than on assessing interpreting.

Taylor (2002, 2017)

Another rubric commonly used to guide interpreting assessment is Taylor’s (2002, 2017). This evidence-based pair of rubrics identifies a major issue in interpreter education in the United States: a serious lack of bilingual skills in both ASL and English. These rubrics, accompanied by examples and descriptions that are very useful for language improvement and used for identifying and assessing language skills, are similar to the EIPA criteria analyzed by Maroney and Smith (2010). While the EIPA has 36 criteria and Taylor’s have only 13, they are remarkably similar in their content and in their focus on language, especially ASL, rather than interpreting (see Figure 1). Taylor lists eight features relevant to English to ASL interpreting—six are ASL language features, one is composure, appearance, and health, and one is interpreting. There are five major features listed for ASL to English interpreting, but two are English features (English lexicon and English discourse), two are relevant to ASL, and one is, again, composure and appearance. Comparing the features and categories of the two, we see that the primary focus of both the EIPA and Taylor’s work is on the language features of ASL, with the EIPA assessing 25 ASL features out of 36 total features (69%) and Taylor nine of 13 (69%). For each, the next highest focus is on the language features of English, with the EIPA having six of 36 (17%) and Taylor three of 13 (23%). In each, assessment of actual interpreting features is least important, with the EIPA assessing only five interpreting features out of 36 (14%) and Taylor assessing only one of 13 (8%).
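For readers who want to check the percentages quoted above, the following short tabulation recomputes them from the reported criteria counts; only the counts themselves come from the cited analyses.

```python
# Criteria counts as reported above for the EIPA (Maroney & Smith, 2010)
# and Taylor (2002, 2017); the percentages are derived directly from them.
criteria_counts = {
    "EIPA":   {"ASL": 25, "English": 6, "Interpreting": 5},   # 36 criteria total
    "Taylor": {"ASL": 9,  "English": 3, "Interpreting": 1},   # 13 criteria total
}

for name, counts in criteria_counts.items():
    total = sum(counts.values())
    shares = ", ".join(f"{focus} {n}/{total} ({n / total:.0%})"
                       for focus, n in counts.items())
    print(f"{name}: {shares}")
# EIPA: ASL 25/36 (69%), English 6/36 (17%), Interpreting 5/36 (14%)
# Taylor: ASL 9/13 (69%), English 3/13 (23%), Interpreting 1/13 (8%)
```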

Measuring language features, regardless of the language, is easy—we can see, hear, and measure them in isolation from the text as a whole. But interpreting encompasses the text as a whole, as the interpreter reconstructs their own perceptions and understandings of the source. Rather than focusing on what is easily noted, we need to refocus on the functions achieved through the use of source language features, and on how those functions are reconstructed through structures and features that achieve the same functions in the target language for the target audience (or for the presenter’s target audience—e.g., TV news aimed at the TV station’s audience, not a specific group of deaf people). To echo Maroney and Smith, it is time to “shift the focus from just ASL development/competence to include the professional practice of interpreting” (2010).

Scales

Scales can cover different scopes and levels of achievement. They should be contextualized within the profession; the performance levels should be established by the field and consumers and scored accordingly.4 Scales can range from measuring entry-level proficiency in a field to measuring growth and learning over time (e.g., from entry to exit of a preparation program; from the start to the end of a course or workshop). Typically, scales have four to five levels.

Scales for performance assessment rubrics need to be based on observable, consistent, and reliably identified levels of achievement (valid), and rated by each assessor with a similar understanding of their intent (reliable). They are not based on the feelings, likes, or dislikes of a single assessor. Nor are they Likert scales, although many people confuse them with Likert scales. A Likert scale is a rating scale, often found on survey forms, that measures how people feel about something. It includes a series of questions that you ask people to answer, with ideally five to seven balanced responses from which people can choose, and often comes “with a neutral midpoint” (downloaded September 18, 2019, from https://wpforms.com/beginners-guide-what-is-a-likert-scale-and-how-to-use-it/). Performance assessment scales do not come with a neutral midpoint—they are not intended to measure our feelings about someone, and the scale increments are not intended to offer a set of balanced choices, but a series of ever-more-skilled interpreting performances. However, this is rarely explicitly stated and is often misunderstood, with those less experienced with assessment assuming that the highest level is required. While organizations responsible for assessment can arbitrarily set these scales, they rarely do so explicitly, creating confusion and, at times, even resentment and a widescale loss of credibility.

In Figure 2, scales from various rubrics are shared to demonstrate the range of scales to be found across both spoken and signed language acquisition and interpreting.

A breakdown of different rubric scales for rating interpreting performance and language fluency.

Scales for interpreting performance:

  • WASLI: Beginning (1), Developing (2), Competent (3), Proficient (4); no level 5
  • NIEC IEP Outcomes: Novice presence, Emerging presence, Strong presence, Mature presence; no level 5
  • US Gov: 1, 2, 3, 4; no level 5
  • Sawyer (2003, p. 241) or GSTI Faculty Handbook (p. 26): Fail, Borderline fail, Pass, High pass; no level 5
  • ATA: Minimal, Deficient, Acceptable, Strong; no level 5
  • EIPA: 0, 1, 2, 3, 4, 5
  • Taylor (2002, 2017): 1 N/A, 2 Not evident, 3 Emerging, 4 Inconsistent, 5 Consistent, 6 Mastered

Scales for language fluency:

  • ASLPI (ASL Proficiency Interview): 0, 1, 2, 3, 4, 5
  • SLPI (Sign Language Proficiency Interview): Novice, Survival, Intermediate, Advanced, Superior
  • ACTFL: Novice, None, Intermediate, Advanced; no level 5

Figure 2. Rubric scales. Note. WASLI = World Association of Sign Language Interpreters; NIEC = National Interpreter Education Center; GSTI = Graduate School of Interpreting and Translation; ATA = American Translators Association; EIPA = Educational Interpreter Performance Assessment; ASLPI = ASL Proficiency Interview; SLPI = Sign Language Proficiency Interview; ACTFL = American Council on the Teaching of Foreign Languages.

Descriptors

Descriptors are the third essential component of any rubric. These relate the scale to the domains, and indicate the types of skills, knowledge, and/or performance levels that are expected at each scale increment. They need to offer clear, consistent, and progressive performance descriptions at each scale increment. Effective rubric descriptors need to focus on observable evidence; they should not use language that judges an interpreter’s character, value, or worth (e.g., makes intelligent decisions; signs look nice; consumers will enjoy watching; demonstrates a mature performance).

It is also important that descriptors offer cohesive increments, adding levels of quality to performance in a domain, not new and different elements. Angelelli (2009) offers an example of consistent, incremental descriptors in Figure 3. The descriptors for the scale are graduated for each domain. The intervals range from consistent and major misunderstandings (lowest skill level = 1), through a flawed understanding (level 2), to a general understanding (level 3), to a complete understanding (level 4), to the highest level of a detailed and nuanced understanding (level 5).

Chart demonstrating how to evaluate translations on a 1 to 5 scale:

  • Level 1: T shows consistent and major misunderstandings of the ST meaning.
  • Level 2: T contains elements that reflect a flawed understanding of major and/or several minor themes of the ST and/or the manner in which they are presented in the ST. There is evidence of errors in the interpretation that lead to the meaning of the ST not being fully communicated in the T.
  • Level 3: T contains elements that reflect a general understanding of the major and most minor themes of the ST and the manner in which they are presented in the ST. There may be evidence of occasional errors in interpretation, but the overall meaning of the ST [is] appropriately communicated in the T.
  • Level 4: T contains elements that reflect a complete understanding of the major and minor themes of the ST and the manner in which they are presented in the ST. The meaning of the ST is proficiently communicated in the T.
  • Level 5: T contains elements that reflect a detailed and nuanced understanding of the major and minor themes of the ST and the manner in which they are presented in the ST. The meaning of the ST is masterfully communicated in the T.

Figure 3. Criterion: Source text meaning. T = translation, TL = target language, ST = source text (Angelelli, 2009, p. 40).

These descriptors demonstrate clear, consistent, and incremental levels of performance, and they focus on the work rather than on the interpreter’s worth or value. Occasionally, and especially for formative educational purposes, rubric scales, such as the one used in the assessment rubric discussed in the next chapter, may be much more explicitly and discretely detailed. One example of this is the scale for the Interpreting Performance Assessment Rubric (Figure 4), discussed in detail in Chapter 3. The broader ranges between whole numbers are delineated so that smaller increments of skill development and growth can be more easily identified. Also included in this scale are indicators of expectations for professional performance, along with consumer expectations.

Chart defining the scoring for the Interpreting Discourse Performance Rubric. To achieve the highest score of 5 (5.0), interpreters should show consistent patterns of all skills and abilities that are detailed and nuanced; masterful. For a score of 4 (4.1–4.9), they should show consistent patterns of all skills and abilities that range from 4.8: often nuanced; 4.6: sometimes nuanced; 4.3: occasionally nuanced; 4.0: detailed and able. For a score of 3 (3.1–3.9), interpreters should show patterns of skills and abilities that range from 3.8: consistently adequately detailed/accurate and able, possibly with rare nuanced segments; 3.6: usually adequately detailed/accurate and able; 3.3: sometimes adequately detailed/accurate and able; 3.0: inconsistently detailed/accurate and able. For a score of 2 (2.1–2.9), interpreters show patterns of skills and abilities that range from 2.8: often somewhat adequately detailed/accurate and able, possibly with rare adequate segments; 2.6: sometimes somewhat detailed/accurate and able; 2.3: occasionally somewhat detailed/accurate and able; 2.0: rarely detailed/accurate and able. Finally, for a score of 0–1 (0–1.9), skills and abilities are rare or not demonstrated, ranging from 1.5+–1.9: rare patterns of skills and abilities are identified; 1–1.5: some skills and abilities may appear occasionally, but few patterns are demonstrated; 0–0.9: few to no patterns of skills and abilities are demonstrated.

Figure 4. Scoring key. Note. 5.0 indicates an interpretation that reflects mastery; 4.0–4.9 indicates a consistently reliable/accurate interpretation and effective interpreting processes; 3.6–3.9 indicates a fairly reliable interpretation focused on content; somewhat effective interpreting processes. Consumers should be vigilant for accuracy. Supervisors should maintain frequent and regular input and observation.
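As an illustration of how the Figure 4 bands might be applied in practice, here is a hypothetical helper that maps a numeric score onto band labels paraphrased from the scoring key; the function and its wording are ours and are not part of the published rubric.

```python
# Hypothetical mapping from a numeric score to the Figure 4 bands
# (boundaries and labels paraphrased from the scoring key above).
def score_band(score: float) -> str:
    if not 0.0 <= score <= 5.0:
        raise ValueError("score must be between 0 and 5")
    if score == 5.0:
        return "5: detailed and nuanced; masterful"
    if score >= 4.0:
        return "4 (4.0-4.9): consistent patterns, detailed and able to often nuanced"
    if score >= 3.0:
        return "3 (3.0-3.9): inconsistently to consistently adequately detailed/accurate"
    if score >= 2.0:
        return "2 (2.0-2.9): rarely to often somewhat detailed/accurate"
    return "0-1.9: skills and abilities rare or not demonstrated"

# Per the Figure 4 note, a 3.6 indicates a fairly reliable, content-focused interpretation.
print(score_band(3.6))
```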

Conclusions

One point that becomes clear from a review of all of these and other rubrics available to us is that they seem to appear in isolation, with minimal explanation of their purposes, applications, and potential uses. Detailed explanations of the domains and scales are lacking, and most notably, so are multiple examples of what interpretations produced by interpreters with various levels of interpreting ability in various contexts look and sound like. One imperative for assessment is that the assessors be regularly recalibrated to stay focused on what the rubrics intend rather than straying away from them. Even the act of determining which level of the scale is appropriate for various patterns of communication becomes problematic without regular recalibration. Having worked as reviewers, assessors, and educators of interpreting for many years, we emphasize that regular and frequent discussion of how and where our assessments align with others and our understanding of the domains, descriptors, and scales is not only helpful but essential.

Given the noted challenges and our need to focus on meaning- or communication-based interpreting, what does such a rubric look like, and how do we develop it? Given that assessors still have only the evidence (i.e., the forms of the product) and no access to the cognitive processes occurring during the interpreting, we still need to assess what we have and be very explicit about everything we might assume or infer. For that, it is perhaps as important to explicitly state what is being implied, and what needs to be inferred, based on a detailed and explicit description of the context and the purposes and intents of all participants.

We need a clear specification that identifies the evidence from the source presenter’s language and an understanding of the types of evidence that might evoke a similar construal of meaning for the target receivers (which must also be clearly and explicitly detailed). If we include the interpreter in the mix, as both source and target, we complicate the process even further—yet we must include the interpreter in this way since the interpreter is both a receiver and a presenter at every step.

Building on the evolving paradigms we are applying to our understanding of meaning as fluid and dynamic, of communication as flowing, and of interpreting as active communication among all the participants (presenters, audiences, and interpreters), some educators, researchers, and assessors have begun to construct interpreting assessment rubrics that support these concepts more accurately and effectively (Angelelli, 2009; Jacobson, 2009; Russell & Malcolm, 2009). In addition, some have begun to develop rubrics that both draw from and support evidence-based teaching, active learning, and criterion-driven source text selection and specification. The meaning-based Interpreting Performance Assessment Rubric was developed as one approach to assessing communication-focused interpreting and interpretations.


1. Assessment Strategies and Tools: Checklists, Rating Scales and Rubrics: http://www.learnalberta.ca/content/mewa/html/assessment/checklists.html; RubiStar: http://rubistar.4teachers.org; Authentic Assessment Toolbox: http://jfmueller.faculty.noctrl.edu/toolbox/portfolios.htm

2. It should be noted that some interpreting performance rubrics do include some language assessment out of necessity, based on interpreters’ actual language skills. This does not make the inclusion appropriate for assessing interpreting, but it is often an unavoidable reality. We must consider how our acceptance of this impacts the success of consumers.

3. These are the basic domains and descriptors we propose for a meaning-based interpreting rubric, and are discussed in depth in Chapter 3.

4. It is notable that as a profession, we currently have no standardized, fieldwide scales for language competencies or for entry to interpreting practice. As of 2022, the Conference of Interpreter Trainers (CIT) is the first organization to develop and discuss any such standards, but these have not been formally adopted.
