Psychology
Vol.09 No.03(2018), Article ID:83182,8 pages
10.4236/psych.2018.93021

Effective Assessments for Interpreter Education Programs to Increase Pass Rates for Certification

Rosemary Liñan Landa1,2*, M. Diane Clark1

1Department of Deaf Studies and Deaf Education, Lamar University, Beaumont, TX, USA

2University of Texas at Rio Grande Valley, Brownsville, TX, USA

Copyright © 2018 by authors and Scientific Research Publishing Inc.

This work is licensed under the Creative Commons Attribution International License (CC BY 4.0).

http://creativecommons.org/licenses/by/4.0/

Received: January 18, 2018; Accepted: March 18, 2018; Published: March 21, 2018

ABSTRACT

In the United States, students enrolled in an American Sign Language (ASL) Interpreter Education Program (IEP) are encouraged to achieve interpreter certification upon completion of the program. Obtaining certification assures employers that they are hiring qualified personnel. A critical examination of the assessments used by IEPs may yield strategies that prepare students to pass state-based assessments by graduation. A review of Program Learning Outcomes (PLO) and the way in which they are assessed should be undertaken to ensure that effective Student Learning Outcomes (SLO) are created to parallel appropriate stages of language development. Knowledge of Language Assessment Literacy (LAL) will assist in the development of well-defined and accurately-based objective assessments that are both valid and reliable. Targeting specific linguistic components, combined with use of the cognitive domain of Bloom’s Taxonomy for teaching and assessing learning outcomes, will provide a clear means for developing assessments. Following Bloom’s Taxonomy, the development of assessments starts at lower-order cognitive processing and progressively moves to higher order (Marzano & Kendall, 2006). An examination of how Bloom’s Taxonomy assists in the development of assessments is outlined.

Keywords:

Interpreter Education, Language Development, Bloom’s Taxonomy, American Sign Language

1. Introduction

There are approximately 119 American Sign Language (ASL) Interpreter Education Programs (IEPs) throughout the United States (Registry of Interpreters for the Deaf, 2014). Upon graduation, and in some cases as a prerequisite for obtaining a diploma, students are expected to take an interpreter certification test. How student performance is assessed within these programs, and whether those assessments translate into successful certification (passing of state or national assessments), have not been studied. Lam (2015) highlights educators’ knowledge and use of such assessments as instrumental in measuring the progress of those learning a second language. The development of language interpreting ability, spoken or signed, is progressive, and adequate student growth toward set learning outcomes after each course completion is imperative. The work of interpreters is highly valuable in that it provides equal access to communication in a range of settings (Damian, 2011). The national Registry of Interpreters for the Deaf (RID) notes that the number of certified interpreters is inadequate (Registry of Interpreters for the Deaf, 2015). Therefore, IEP graduates who go on to become certified are in high demand (Webb & Napier, 2015; Witter-Merithew & Johnson, 2005).

As vital as certified interpreters are, most IEP faculty are unaware of how their knowledge of language assessment can be a resource (Damian, 2011). Further research is needed on classroom-based assessments of students’ use of ASL. An examination of current assessments can reveal the key to unlocking the transition from competence mastery in class to competence mastery on the state-based assessment granting certification. At stake is the ASL interpreting profession’s ability to keep up with the current demand for certified interpreters serving three million deaf and hard of hearing consumers in the United States (Zazove, Meador, Reed, & Gorenflo, 2013). Also at stake is the ability of graduates to obtain certification, assuring employers that they are hiring qualified personnel.

2. ASL Interpreter Education and Assessment

The question is: how do IEPs assess student performance? There is little published on the subject across IEPs in any language, and even less related to ASL interpreting. Kasilingam, Ramalingam, & Chinnavan (2014) found that many reports regarding IEP assessment methods are vague. The most commonly reported assessments included discourse completion tasks, which attempt to elicit a deducible response from students by prompting them with a language sample. While student responses test comprehension, they are not necessarily conclusive (Van Compernolle & Kinginger, 2013).

Furthermore, Van Compernolle & Kinginger (2013) uncover a consistent failure on the part of IEPs to demonstrate reliability and/or validity among assessments. Across spoken and sign language IEPs, there is a need for valid and reliable assessments that track students’ abilities throughout the program as they gain skills and competencies. These assessments need to be developed following psychometrically sound methods, which requires that assessment designs include SLOs for each course or series of courses that students complete (Poehner, 2011). Well-defined and accurately-based course assessments will assist with faculty objectivity when grading and, subsequently, increase the transparency of course learning outcomes for students (Timarova & Salaets, 2011).

Language Assessment Literacy (LAL) refers to “the acquisition of knowledge, skills, and principles of test construction, test interpretation and use, test evaluation, and classroom-based assessment” (Lam, 2015: p. 170). Broadening our knowledge of LAL can drive meaningful advances in the criteria used to assess student performance. A move toward effective use of assessments to measure student abilities objectively across course completion is worth considering. Both IEPs and students will be empowered, in that milestones will be listed as PLOs that students are explicitly aware of and can use when reflecting on their own progress (Fulcher, 2012).

Given the scarcity of information regarding the assessments in use by ASL/English interpreting programs, further research is necessary to identify what is currently used and how effective those assessments are in preparing students for state-based certification tests. One approach, established by the Board for Evaluation of Interpreters (BEI), the Texas state body granting ASL interpreter certification, is to identify specific competencies requiring mastery of ASL/English proficiency and to assess those competencies objectively. The BEI competencies include specialized vocabulary, register variation, rhetorical features, vocabulary, grammatical structures, and appropriate sociocultural discourse. Additionally, features specific to ASL, including the use of classifiers, non-manual markers, accuracy of fingerspelling, numbers, the use of sign space, and grammatical space, are tested by the BEI (Department of Assistive and Rehabilitative Services, 2012). Utilizing a similar objective measure targeting specific competencies, combined with Bloom’s Taxonomy (Marzano & Kendall, 2006) for teaching and assessing learning objectives, can be a new direction in the advancement of future IEPs.

3. Bloom’s Taxonomy and Assessment

Bloom’s Taxonomy has influenced the teaching and assessment of learning objectives for many years (Marzano & Kendall, 2006). The main objective of the taxonomy is to have students demonstrate deeper learning and use that knowledge in generalizable contexts (Adams, 2015). The discussion in Kasilingam et al. (2014) points to Bloom’s cognitive domain as one that allows objective evaluation of student performance (Marzano & Kendall, 2006). Cognitive methods of assessment are “collaborative assignments requiring students to engage in the problem or are project-based activities” (Kasilingam et al., 2014: p. 28). In addition, these cognitive tasks provide increasingly complex practice, striking an intricate balance between achievement motivation and expectancy of goal attainment, which relate to the belief that a certain act is needed in order to achieve a certain goal. Sufficient arousal of both achievement and goal attainment results in high levels of motivation (Timarova & Salaets, 2011). Moderate levels of stress provide motivation leading to successful task mastery; when stress levels are too high, however, performance decreases as the student can no longer perform (Timarova & Salaets, 2011). Interestingly, students who scored lower on the debilitating anxiety scale showed a higher tolerance for stress, a trait that is highly advantageous in interpreters (Timarova & Salaets, 2011; Gile, 1995). Tracking learner performance through a series of increasingly demanding tasks is essential. Use of Bloom’s cognitive domain can assist IEPs not only in designing methods for assessment, but also in evaluating the levels of the methods currently used to assess IEP student achievement (Marzano & Kendall, 2006).

Bloom’s Taxonomy entails six levels, arranged from lower-order to higher-order cognitive processing (Marzano & Kendall, 2006). The first, at the lower-order level, is remembering, which includes tasks such as listing, describing, or naming. Assessments at this level are therefore as straightforward as eliciting recall through multiple-choice or short-answer questions (Marzano & Kendall, 2006). Students might find this type of remembering more complex, as the difference in modality between spoken and signed languages requires students to recall English-to-ASL equivalencies and vice versa (Damian, 2011). Assessment methods should elicit recall through multimedia that gives students the opportunity to become accustomed to perceiving the language visually via proficient users (Golos & Moses, 2015). IEPs can pace the request for recall using timed assessments, revealing the level of automaticity at which students perform when recalling or completing multiple-choice or short-answer tests. These types of assessments also standardize assessment tools for all users.

The second level is comprehension, or understanding (Marzano & Kendall, 2006). For IEPs, assessing both comprehension and production of language is necessary. Spoken languages typically differ from ASL in that comprehension precedes production, whereas in ASL, sign production precedes comprehension (Damian, 2011). At this level, educators ask students not only to recall but to comprehend the material well enough to interpret, summarize, and/or paraphrase it. Students must demonstrate that they comprehend the meaning of the provided information. Assessments of this type include opportunities for interpretation of scripted tasks, such as viewing ASL stories and reproducing them, which demonstrate the ability to paraphrase and retell.

The third level in Bloom’s Taxonomy is application (Marzano & Kendall, 2006). Here students need to apply information, “knowledge, and skills, or technique in new situations” (Adams, 2015: p. 152). Starting with untimed assessments of comprehension skills, students could then transition to timed receptive assessments. In untimed assessments, students analyze and review increasingly complex source material at their own pace prior to completing the assessment. In timed receptive assessments, there is no review, and the assessment is completed as the source material plays; while a brief period is given for processing, the assessment must be finished within a specified time limit. Initially, practice with consecutive interpreting material could be attempted to bridge knowledge and skills in a new method of application. In consecutive interpreting, “the languages do not interfere or overlap,” thus “making it easier to isolate language components”; faculty can additionally set a time limit to keep up with the consecutive sequence (Damian, 2011: p. 260). Students need to simulate basic interpreting performance skills, demonstrating their ability to navigate each language relatively fluently at each course level. Again, IEPs can set the pace via sign speed and consecutive breaks.

Bloom’s fourth level is analysis (Marzano & Kendall, 2006). Here educators ask students to break down information “into its component parts in order to identify the most appropriate terms” (Adams, 2015: p. 152). Assessments of this type include students’ linguistic analyses of their own interpretations between ASL and English. By deconstructing interpretations while comparing and organizing linguistic components from each language, students improve their ability to explore the relationships between the languages. Independently, students begin to take ownership of the process of interpreting.

Initially, analysis will also require evaluation. The fifth level of Bloom’s Taxonomy is evaluation (Marzano & Kendall, 2006), which entails “providing learner feedback” and “appraises the validity” of learner assessments (Adams, 2015: p. 153). Students will require guidance as they develop self-analyses. Instructors’ constructive feedback, alongside students’ own analyses of the linguistic components in their interpretations, will be beneficial. In conjunction with analysis, evaluation will serve as a mentoring tool to foster students’ ability to review their own work.

Finally, creativity is the highest order of cognitive processing in Bloom’s Taxonomy (Marzano & Kendall, 2006). Creativity is defined as being able to “create a novel product in specific situations” (Adams, 2015: p. 153). Simultaneous interpreting assessments are especially challenging in that students “must practice split attention”; they must “listen to the source language and their interpretation at the same time” (Damian, 2011: p. 262; Gile, 1995). Interpreters are required to process “incoming information and executive decisions at a rate faster than or consistent with the incoming information” (Macnamara, Moore, Kegl, & Conway, 2011: p. 122). Therefore, it is important to continue challenging students to practice split attention through various simultaneous interpreting assessments. Another option is to create assessments that model national or state-based assessments, further increasing the complexity of the task. These types of assessments simulate what is expected for future graduate employability.

Knowledge of LAL and Bloom’s Taxonomy for teaching and assessing learning outcomes will provide a clear means for evaluating the assessments currently in use. For further reflection on how future assessments can be developed, a visual reference showing how the six levels of Bloom’s Taxonomy progress from lower- to higher-order cognitive processing is presented in the Appendix. IEPs need to undertake a thorough review of PLOs and their paired methods of assessment (Marzano & Kendall, 2006), ensuring that SLOs develop progressively and parallel language development, moving from lower- to higher-order cognitive processing. A combination of targeting specific linguistic components and developing well-defined and accurately-based objective assessments, both valid and reliable, provides a future direction worth considering.

Cite this paper

Landa, R. L., & Clark, M. D. (2018). Effective Assessments for Interpreter Education Programs to Increase Pass Rates for Certification. Psychology, 9, 340-347. https://doi.org/10.4236/psych.2018.93021

References

1. Adams, N. E. (2015). Bloom’s Taxonomy of Cognitive Learning Objectives. Journal of the Medical Library Association, 103, 152-153. https://doi.org/10.3163/1536-5050.103.3.010

2. Damian, S. (2011). Spoken vs. Sign Languages—What’s the Difference? Cognition, Brain, and Behavior: An Interdisciplinary Journal, 15, 251-265. https://www.questia.com/library/journal/1P3-2386683691/spoken-vs-sign-languages-what-s-the-difference

3. Department of Assistive and Rehabilitative Services (2012). BEI General Study Guide. https://hhs.texas.gov/sites/default/files//documents/doing-business-with-hhs/providers/assistive/bei/bei_study_guide.pdf

4. Fulcher, G. (2012). Assessment Literacy for the Language Classroom. Language Assessment Quarterly, 9, 113-132. https://doi.org/10.1080/15434303.2011.642041

5. Gile, D. (1995). Basic Concepts and Models for Interpreting and Translation Training. Philadelphia, PA: John Benjamins Publishing Company. https://doi.org/10.1075/btl.8

6. Golos, D. B., & Moses, A. M. (2015). Supplementing an Educational Video Series with Video-Related Classroom Activities and Materials. Sign Language Studies, 15, 103-125. https://doi.org/10.1353/sls.2015.0005

7. Kasilingam, G., Ramalingam, M., & Chinnavan, E. (2014). Assessment of Learning Domains to Improve Student’s Learning in Higher Education. Journal of Young Pharmacists, 6, 27-33. https://doi.org/10.5530/jyp.2014.1.5

8. Lam, R. (2015). Language Assessment Training in Hong Kong: Implications for Language Assessment Literacy. Language Testing, 32, 169-197. https://doi.org/10.1177/0265532214554321

9. Macnamara, B. N., Moore, A. B., Kegl, J. A., & Conway, A. A. (2011). Domain-General Cognitive Abilities and Simultaneous Interpreting Skill. Interpreting, 13, 121-142. https://doi.org/10.1075/intp.13.1.08mac

10. Marzano, R. J., & Kendall, J. S. (2006). The New Taxonomy of Educational Objectives. Thousand Oaks, CA: Corwin Press.

11. Poehner, M. E. (2011). Validity and Interaction in the ZPD: Interpreting Learner Development through L2 Dynamic Assessment. International Journal of Applied Linguistics, 21, 244-263. https://doi.org/10.1111/j.1473-4192.2010.00277.x

12. Registry of Interpreters for the Deaf (2014). Interpreter Education Program. https://myaccount.rid.org/Public/Search/Organization.aspx

13. Registry of Interpreters for the Deaf (2015). The Field of Interpreting—Opportunities and Growth. https://www.rid.org/about-rid/about-interpreting/

14. Timarova, S., & Salaets, H. (2011). Learning Styles, Motivation and Cognitive Flexibility in Interpreter Training: Self-Selection and Aptitude. Interpreting, 13, 31-52. https://doi.org/10.1075/intp.13.1.03tim

15. Van Compernolle, R. A., & Kinginger, C. (2013). Promoting Metapragmatic Development through Assessment in the Zone of Proximal Development. Language Teaching Research, 17, 282-302. https://doi.org/10.1177/1362168813482917

16. Webb, S., & Napier, J. (2015). Job Demands and Resources: An Exploration of Sign Language Interpreter Educators’ Experiences. International Journal of Interpreter Education, 7, 23-50. http://www.cit-asl.org/docs/IJIE_7-1_Issue.pdf#page=26

17. Witter-Merithew, A., & Johnson, L. J. (2005). Toward Competent Practice: Conversations with Stakeholders. Alexandria, VA: Registry of Interpreters for the Deaf.

18. Zazove, P., Meador, H. E., Reed, B. D., & Gorenflo, D. W. (2013). Deaf Persons’ English Reading Levels and Associations with Epidemiological, Educational, and Cultural Factors. Journal of Health Communication, 18, 760-772. https://doi.org/10.1080/10810730.2012.743633

Appendix