Creative Education
Vol.06 No.08(2015), Article ID:56501,11 pages
10.4236/ce.2015.68080

Academic Diversity and Assessment Process for CS Program Accreditation

Arif Bhatti, Irfan Ahmed

College of Computers and Information Technology, Taif University, Taif, Kingdom of Saudi Arabia

Email: a.bhatti@tu.edu.sa, i.ahmed@tu.edu.sa

Copyright © 2015 by authors and Scientific Research Publishing Inc.

This work is licensed under the Creative Commons Attribution International License (CC BY).

http://creativecommons.org/licenses/by/4.0/

Received 14 April 2015; accepted 17 May 2015; published 21 May 2015

ABSTRACT

Underdeveloped countries have realized that investment in and improvement of higher education are very important for progress. This is evident in the number of new universities and institutes of higher learning established in the last 20 - 30 years. In these universities, administrative processes are well established but academic processes are not. Accreditation bodies such as ABET do not dictate any specific assessment process, but require each candidate program to have an assessment process that is reliable and can identify weak points in the program for continuous improvement. This paper describes a successful assessment process adopted at the College of Computers and Information Technology, Taif University.

Keywords:

Assessment, ABET, Evaluation, Learning Outcomes

1. Introduction

Higher education programs are meant to produce individuals with pre-defined skills to succeed in real life. Assessment processes are deployed to quantitatively measure program achievement levels and to find out which areas are weak and require further improvement. Even though the assessment processes deployed for computer science programs at different universities look similar, their actual implementations can differ drastically if the faculty members in a program were not trained in similar academic environments using similar standards.

In North America and Europe, institutions of higher education and their programs have well established assessment processes and standards. Faculty members in these institutions are tenured, have similar qualifications and are trained in similar academic cultures. Underdeveloped countries have realized the need for and the importance of institutions of higher learning, which has resulted in the establishment of newer universities offering new programs. Faculty members teaching in these programs are usually non-tenured, trained in different academic environments and sometimes even have a different mix of qualifications.

Such a heterogeneous faculty makes the assessment process more complex: it requires the establishment of common standards and mechanisms to ensure that faculty members follow those standards. This enforcement reduces the flexibility that an instructor may enjoy in a more established program. Figure 1 shows the breakdown of faculty according to the country where they completed their education and academic training.

Assessment is defined as one or more processes that identify, collect, and prepare the data necessary for evaluation. The assessment process for student learning outcomes has four key phases: the specification of learning outcomes, the alignment of assessment methods with student outcomes and teaching methods, the use of different assessment methods to gather evidence of student learning, and the use of assessment evaluations to improve the educational program (Crawley et al., 2014). For newer programs with a heterogeneous faculty, assessment also includes the definition of common standards, well-designed courses with clearly identified outcomes, and mechanisms to ensure that faculty members adhere to the standards and the courses. Evaluation is defined as one or more processes for interpreting the data acquired through the assessment processes in order to determine how well student outcomes are being attained.

Assessment processes and assessment results are mostly reported on campus in faculty and management meetings. According to a National Institute for Learning Outcomes Assessment report (Kuh et al., 2014), in 2013 only about 35% of campuses made assessment results publicly available on their websites or in publications.

Assessment processes that focus on continuous improvement are broadly studied in the literature. One of the main challenges in an assessment process is the involvement of the entire faculty, rather than a small committee of faculty members responsible for the whole process (Helmick & Gannod, 2009; Sundararajan, 2014).

An assessment plan and continuous improvement process with faculty involvement at James Madison University is presented in (Pierrakos & Watson, 2013). The faculty is involved in the assessment process through course-level assessment: for course-level continuous improvement, instructors prepare Course Assessment and Continuous Improvement (CACI) reports, which serve as direct assessments of course outcomes and student outcomes. At the program level, Assessment Committee members prepare Student Outcome Summary Reports (SOSR) for program-level continuous improvement. In this framework, the faculty role is restricted to course-level assessment. Fu et al. (Fu et al., 2014) describe assessment as the most difficult and time-consuming process because it requires evaluation of SO attainment and the application of the results to continuous program improvement. The authors present the assessment structure used for initial and comprehensive accreditation of the CS program at the University of Oklahoma. Despite an established program and a selective faculty, they find that revision and standardization of course syllabi are required so that cross-section consistency is achieved when different instructors teach the same course. An outcome-based Hierarchical Quantitative Assessment (HQA) scheme is presented in (Reed & Zhang, 2013). In this work, the authors add an intermediate layer of generalized curriculum

Figure 1. Faculty diversity in college of computers and information technology.

outcomes (GCO) between course learning outcomes and program outcomes. They describe only course-level assessment without going into program-level detail. In (Al-Attar & Abu-Jdayil, 2010), the assessment process and assessment tools are briefly explained without considering the inherent diversity in evaluated data coming from faculty of different educational backgrounds. Harmanani (Harmanani, 2013) has presented a bottom-up outcome-based learning assessment process that led to the accreditation of the computer science program at the Lebanese American University. The role of non-tenure-track faculty (NTTF) in student learning outcomes assessment has been investigated in (Kezar & Maxey, 2014) under the Delphi project at the University of Southern California. The study concludes that NTTFs are an extremely diverse group. They possess rich practitioner knowledge gained through years of work in educational and professional fields in various cultural and social environments. These instructors may bring different perspectives and can help to develop student learning goals aligned with the challenges and opportunities students will face in their practical lives. This study is more about policies at the higher management level to bring NTTF into the mainstream of educational institutes, and does not provide the design or implementation of an assessment process in a diverse faculty environment.

This paper describes the assessment process designed and implemented for ABET (Accreditation Board for Engineering and Technology) and NCAAA (The National Commission for Academic Accreditation & Assessment) accreditation, to meet the continuous improvement criteria, at the College of Computers and Information Technology, Taif University, a real example of an educational institute with a diverse faculty. ABET is an international accreditation body, while NCAAA is the national accreditation body in Saudi Arabia.

2. Assessment Objectives and Outcomes

At a high level, the success of a program depends on the performance of its alumni in the real world. Program educational objectives (PEO) are the expectations and success levels that graduates should achieve within 3 to 5 years of graduation. Table 1 shows the PEOs for the program; the last column shows the source of data for computing the achievement level of each objective.

Student Outcomes (SO) are the skills that students should have acquired through course work by the time of graduation. Each course partially contributes to building these skills. Table 2 shows the SOs used for program assessment.

Course Learning Outcomes (CLO) are micro-level skills that a student should acquire in a course. The skill acquired in a CLO partially contributes to an SO. The course designer is responsible for establishing the contribution relationship between a CLO and an SO. Table 3 shows the CLOs for the first programming course.

Overall program assessment is computed by combining assessment results from direct and indirect approaches, using a weighting scheme for each approach. In the direct approach, every course offered in the program has well defined course learning outcomes. CLO attainment and achievement metrics for a course are computed from students' performance in the course. SO attainment and achievement are derived from CLO attainment and achievement, and quantitative values of program-level SOs and PEOs are derived from the course-level SO values. The indirect approach computes program-level SOs and PEOs from qualitative data collected through surveys and comments from stakeholders. Figure 2 shows the relationships of outcomes and the assessment tools used in direct and indirect assessment.
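The weighted combination of direct and indirect results just described can be sketched as follows. The weights (0.7 direct, 0.3 indirect) and the SO values are illustrative assumptions, not the program's actual scheme:

```python
# Combine direct and indirect assessment into an overall SO score.
# The weights below are illustrative assumptions, not the program's
# actual weighting scheme.

def combine_assessment(direct, indirect, w_direct=0.7, w_indirect=0.3):
    """Weighted average of direct and indirect SO scores (percentages)."""
    return {so: round(w_direct * direct[so] + w_indirect * indirect[so], 2)
            for so in direct}

# Hypothetical attainment values for two student outcomes (percent).
direct = {"SO-a": 78.0, "SO-b": 64.0}    # from course-level CLO data
indirect = {"SO-a": 82.0, "SO-b": 70.0}  # from surveys and stakeholder input

overall = combine_assessment(direct, indirect)
print(overall)  # {'SO-a': 79.2, 'SO-b': 65.8}
```

The same function can be reused at the PEO level, since PEOs are likewise derived from a mix of direct course data and stakeholder surveys.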

Table 1. Program educational objectives (PEO) for CS program.

Table 2. Student outcomes (SO) from ABET adopted for CS program.

Table 3. Course learning outcomes (CLO) for first course in programming.

Figure 2. Components of direct and indirect assessment approaches.

Tools used for direct assessment include homework, assignments, quizzes, projects, research papers, and midterm and final exams. Quantitative values for course learning outcomes are computed from students' performance in these assessments, and values of student outcomes are derived from the CLO values.

Course designers are required to establish a relationship between course topics, CLOs and SOs for all the courses in the program, as shown in Figure 3. Each course has a topics-CLO mapping that shows each topic's contribution to developing the skill defined in a CLO. The course instructor uses this mapping in assessment design: if a question tests a student's competence in a topic, then the question maps to the CLO that maps to that topic. Every question in a course assessment maps to one or more CLOs.

Each course has a CLO-SO mapping that reflects the course designer's perception of how a skill learned in a CLO contributes to the skill defined by an SO. This mapping is used to derive quantitative values of SO achievements from CLO achievement values. Multiple CLOs from different courses collectively define the achievement level of a single SO.
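The derivation of SO values from CLO values via the CLO-SO mapping can be sketched as below. The mapping and attainment values are a hypothetical example for one course; the paper leaves the exact weighting to the course designer, so an unweighted average is assumed here:

```python
# Derive SO attainment from CLO attainment via a CLO-SO mapping.
# The mapping and CLO values are hypothetical; an unweighted average
# over mapped CLOs is an assumed convention.

clo_attainment = {"CLO1": 75.0, "CLO2": 60.0, "CLO3": 85.0}  # percent

# Which CLOs contribute to which SO (the course designer's perception).
clo_so_map = {
    "SO-a": ["CLO1", "CLO2"],
    "SO-b": ["CLO3"],
}

def so_attainment(clo_vals, mapping):
    """Average the attainment of all CLOs mapped to each SO."""
    return {so: sum(clo_vals[c] for c in clos) / len(clos)
            for so, clos in mapping.items()}

print(so_attainment(clo_attainment, clo_so_map))
# SO-a = (75 + 60) / 2 = 67.5, SO-b = 85.0
```

At the program level, the same aggregation is repeated with CLOs drawn from all courses that map to a given SO.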

Tools used in indirect assessment are: course exit surveys, students' presentations, graduate exit surveys, alumni surveys, expert reviews and comments from the advisory board. To compute quantitative values for SOs and PEOs, these qualitative observations need to be mapped onto quantitative values.
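One common way to map qualitative responses onto quantitative values is a Likert-style scale. The 5-point scale below is an assumed convention for illustration, not necessarily the one used by the program:

```python
# Map qualitative survey responses onto a 0-100 scale (an assumed
# 5-point Likert convention) so they can be combined with
# direct-assessment data.

LIKERT = {"strongly disagree": 0, "disagree": 25, "neutral": 50,
          "agree": 75, "strongly agree": 100}

def survey_score(responses):
    """Average quantitative value of a list of qualitative responses."""
    values = [LIKERT[r.lower()] for r in responses]
    return sum(values) / len(values)

# Hypothetical course-exit survey answers for one SO-related question.
answers = ["agree", "strongly agree", "neutral", "agree"]
print(survey_score(answers))  # (75 + 100 + 50 + 75) / 4 = 75.0
```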

3. Course Design and Management

The direct assessment approach depends on the quality of the courses in a program. In a program with a homogeneous faculty, all faculty members subscribe to the same standards and the same philosophy in teaching and assessment. This allows limited control over the courses, giving instructors more flexibility in designing and delivering them. If the faculty members of a program have different academic backgrounds and subscribe to different teaching standards and philosophies, then course contents, assessment tools and teaching approaches have to converge and be standardized to allow meaningful assessment of the program. The following subsections discuss the course components and explain a process for designing a course while engaging all faculty members to develop a common standard.

3.1. Components of Course Syllabus

The study plan for the CS program enumerates the courses that a student has to complete to graduate from the program. A well designed course has clear and well defined objectives and outcomes. Figure 4 shows the different components of a course. Information in the first box from the left is part of the catalog and cannot be changed, as it has an impact on the student study plan. Course components in the second box are related to the objectives and outcomes of the course. These components play a significant role and contribute to the overall program assessment. Items in the third box are related to the actual course content and how that content contributes to the outcomes.

Figure 3. Computation of CLO and SO attainments and achievements from course assessments and evidences.

Figure 4. Components of a course syllabus. Well defined topics-CLO and CLO-SO mapping are crucial for quantitative assessment.

Items in the rightmost box are policies and tools used to determine students' success or failure, in addition to the evaluation of CLOs and SOs.

In an established and mature program, an instructor is not allowed to change the catalog information of a course but is allowed to design the remaining components to deliver a high quality course. In a diverse program with faculty from different academic backgrounds, all these components are designed collaboratively, and the instructor does not have the authority to change any component except the course assessment tools and grading criteria. A diverse but mature program may impose restrictions somewhere between these two extremes.

3.2. Design and Change Process

A well designed course requires meaningful contribution from faculty members to ensure that course learning outcomes are well defined, concise and measurable. If the CLOs are not well defined, or are too many or too few, then the computed assessment values will not be meaningful, resulting in a faulty assessment process.

3.2.1. Entities and Responsibilities

Designing a new course requires a clear understanding of the course objectives and learning outcomes, and of how the outcomes of the course contribute to the student outcomes at the program level, which ultimately contribute to the program educational objectives as shown in Figure 2. To achieve this objective, a process was designed and implemented as shown in Figure 5. This process is designed to ensure that all faculty members are involved in a meaningful way.

The main entities in the process are the course designer or instructor, the course coordinator, subject experts, the department curriculum committee and the department council. The course designer or instructor is responsible for working out the details of the course and producing all the required components shown in Figure 4 in the form of a course syllabus. The course coordinator is a faculty member who is an area specialist and is responsible for maintaining and managing the course. Subject experts can be members of the faculty or may be from other institutions. They are responsible for reviewing the proposed course and, based on their expertise, providing feedback on all aspects of the course. The department curriculum committee is responsible for all courses in the study plan, ensuring that there are no gaps or overlaps with respect to course contents, CLOs, and CLO-SO mapping. All faculty members are part of the department council, which is the only body authorized to approve any change in the program.

3.2.2. Process Flow

A faculty member who is planning to teach a course starts with a course syllabus that has all the components shown in Figure 4. If an approved syllabus already exists and the faculty member wants to make a change, or if he wants to propose a new course, then he should submit a proposed syllabus along with a course change form to the department curriculum committee.

Figure 5. Process used for design of a new course and course changes.

The curriculum committee ensures that all required information is complete before proceeding. If the request is complete, it is forwarded to the subject experts for comments and feedback, along with information about the course coordinator.

After completing their review, the subject experts send the original documentation to the course coordinator along with their comments and feedback.

After receiving the comments and feedback, the course coordinator compiles a course syllabus that incorporates the comments and concerns of the reviewers. The coordinator can consult the proposing faculty member or the reviewers during this process. If the differences are irreconcilable, he adds his findings and sends all material to the curriculum committee.

The committee reviews the whole file, which contains the original proposed syllabus, the comments from the reviewers, and the updated syllabus or comments from the coordinator. The committee then prepares its own recommendations for the department council.

The council, which includes all faculty members of the department, considers the recommendations, decides whether the course is approved, rejected or needs further review, and sends its decision to the department curriculum committee. The committee either notifies the proposing faculty member of the council's decision or sends the proposal to the course coordinator for further review.

4. Assessment Process

The program assessment process starts with well designed courses with clear and well defined outcomes. Course instructors are expected to prepare and deliver courses, and to collect evidence for the course learning outcomes. Programs with faculty members from diverse academic backgrounds require extra steps compared to programs with a homogeneous tenured faculty. Figure 6 shows the components of the implemented assessment process, and the following sections explain the entities, responsibilities, rationale and tasks needed to complete the process.

4.1. Entities and Responsibilities

Students and course instructors are the main entities in any academic process and they act as partners in learning

Figure 6. Task performed and flow of information among tasks of assessment process.

and assessment (Boud, 2010). Instructors deliver the course to develop skills in students according to the course learning outcomes. Instructors conduct student assessments to learn how students are performing in their courses. Each assessment has questions to test the students' skill level according to the course learning outcomes.

The Program Assessment Unit (PAU) is the entity responsible for collecting course-related data and evidence of outcomes. The PAU evaluates the collected data to compute the grade distribution, CLO attainment, CLO achievement, SO attainment and SO achievement for all sections and all courses, as well as aggregated values for the program itself.

Course instructors are required to use the assessment done by the PAU and to do their own self-assessment for the course. Course coordinators are responsible for reviewing the course assessment, the instructors' self-assessments and the collected data, and for providing their own assessment, which may include a recommendation to update the course. The department curriculum committee and the PAU review the coordinator assessment reports for all courses and make recommendations. The department council is responsible for discussing the recommendations and taking decisions for continuous improvement.

4.2. Assessment Tasks

Program-level assessment is performed to find the issues, problems and areas needing improvement for continuous improvement. This section discusses and explains the different tasks that need to be performed.

4.2.1. Verification of Approved Course Syllabuses

Diversity in faculty academic background, training and qualification requires that all sections of a course are coordinated and consistent with each other. To ensure this, all course components shown in Figure 4 are required to be approved by the department. As a matter of policy, course instructors are not allowed to change any course component except assessments such as quizzes, homework, and midterm and final exams. Violation of this policy has a serious impact on the quality of assessment and makes it impossible to compare two sections of the same course offered in the same semester by two different faculty members. A commitment by course instructors, at the start of the course, to teach according to the approved course syllabus helps in implementing this policy.

4.2.2. Assessment Design and Mapping of Questions to CLOs

The objective of each course is to teach certain skills, which are defined as course learning outcomes. Course assessments such as quizzes, homework, and exams are meant to verify whether the course objectives are met. An assessment question is the basic unit meant to test a specific skill. A question designed to test a specific skill maps to a single CLO of the course. A question meant to test the overall understanding of the course contents does not map to a single CLO, which pollutes the assessment results. To ensure a clear link between questions and CLOs, a policy of one-to-one mapping between a question and a CLO was introduced. To accommodate general questions that cannot be mapped to a single CLO, an exception of 25% was granted.
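A simple check of this policy — that questions mapped one-to-one to a CLO carry at least 75% of the marks, leaving at most 25% for general questions — can be sketched as follows. The question data and the marks-weighted interpretation of the 25% exception are assumptions for illustration:

```python
# Check an assessment against the question-to-CLO mapping policy:
# questions mapping to exactly one CLO must carry at least 75% of the
# total marks (i.e. at most 25% may be general, multi-CLO questions).
# The exam data below is a hypothetical example.

def check_mapping_policy(questions, threshold=0.75):
    """questions: list of (marks, [mapped CLOs]) tuples."""
    total = sum(marks for marks, _ in questions)
    single = sum(marks for marks, clos in questions if len(clos) == 1)
    return single / total >= threshold

exam = [
    (10, ["CLO1"]),                   # one-to-one mapped
    (10, ["CLO2"]),
    (10, ["CLO3"]),
    (10, ["CLO1", "CLO2", "CLO3"]),   # general question, 25% of marks
]
print(check_mapping_policy(exam))  # True: 30/40 = 75% one-to-one
```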

4.2.3. Data Collection for Delivered Courses

What to collect? A checklist was created for instructors to ensure that all required data is available for the evaluation and computation of CLOs, SOs and PEOs. The data collected for each course includes: all course material used in teaching the course, including lab material; sample assessments and evidence from student work that demonstrate the acquired skills defined in the CLOs and SOs; and detailed question-wise data for all students in the midterm and final exams. Instructors can also submit detailed question-wise data for other assessments such as quizzes, homework and lab work. Instructors are also required to conduct a course survey to learn about students' perception of CLO coverage in the course. Filled-in survey forms are also collected for each section of the course.

4.2.4. Verification of Integrity and Accuracy of Collected Data

This task could be optional, or not even required, for established programs. But for newer programs with faculty members of diverse academic backgrounds, this step is very important for training reasons and to make sure that all faculty members understand the need for quality data in assessment. Incomplete, partial or incorrect data can produce misleading assessment results. The first step in this task is to ensure that all required data is submitted for each section of the course; a checklist is used to keep track of the collected data. In the second stage, a manual audit is done to ensure that the data items submitted for each section are consistent with each other. In case of any discrepancy, the concerned instructor is required to fix the issue and produce consistent data. During the final stage, all collected data is digitized for further processing and reviews.

4.2.5. Data Evaluation Process

The previously mentioned tasks ensure that there are established standards for course contents, teaching and data collection so that evaluation is meaningful. This task computes the following metrics for each course section.

• Grade Distribution.

• CLO Attainment: the percentage attainment of a CLO, derived from the average marks obtained divided by the total marks, over the questions that map to the CLO.

• CLO Achievement: the percentage of students who met the expectation or target, where target = minimum(average obtained marks, 70% of maximum marks), over the questions that map to the CLO.

• SO Attainment: derived from CLO attainment using the CLO-SO mapping in the course syllabus.

• SO Achievement: the percentage of students who met the expectation or target, using the same formula: target = minimum(average obtained marks, 70% of maximum marks).

• Students' Perception of CLO Coverage.

The same metrics are also computed at the course level for courses with multiple sections, so that all sections of the same course can be compared.
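The attainment and achievement formulas above can be made concrete with a short sketch. The student marks are invented sample data; the formulas follow the definitions in the list above:

```python
# Compute CLO attainment and achievement for one section, following the
# formulas above. The marks are hypothetical sample data.

def clo_metrics(max_marks, student_marks):
    """max_marks: total marks of the questions mapped to this CLO.
    student_marks: marks each student obtained on those questions."""
    average = sum(student_marks) / len(student_marks)
    # Attainment: average obtained marks as a percentage of total marks.
    attainment = 100.0 * average / max_marks
    # Target is the lower of the class average and 70% of maximum marks.
    target = min(average, 0.7 * max_marks)
    # Achievement: percentage of students who met the target.
    achieved = sum(1 for m in student_marks if m >= target)
    achievement = 100.0 * achieved / len(student_marks)
    return attainment, achievement

# Ten students' marks on the questions mapped to one CLO (out of 20).
marks = [18, 15, 12, 9, 16, 14, 11, 17, 13, 10]
att, ach = clo_metrics(20, marks)
print(f"attainment={att:.1f}%  achievement={ach:.1f}%")
# attainment=67.5%  achievement=50.0%
```

Here the class average is 13.5/20, so attainment is 67.5%; the target is min(13.5, 14) = 13.5, which five of the ten students meet, giving 50% achievement.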

4.2.6. Instructors’ Self Assessment

Instructors of all course sections are required to complete a self-assessment form using the evaluation results for their sections. This form is designed to capture the instructor's view of the students' performance in his section compared with other sections of the same course, using the computed assessment metrics for his section and for the course as a whole. The instructor is also encouraged to comment on other aspects of the course, such as the quality of students, language proficiency, teaching environment and computing resources.

4.2.7. Coordinator’s Review and Assessment

Each course is assigned a faculty member as coordinator, who may or may not be teaching one of its sections. The coordinator is expected to review the course material, the assessment results for all sections of the course, and the instructors' self-assessment reports for all sections. The major components of the coordinator's review are: high-level comments on the course itself, observations based on the assessment results and instructors' self-assessments, and recommendations to improve the course in its next offering.

4.2.8. Program Level Reports and Recommendations

The department curriculum committee reviews all coordinators' reports and other assessment data for all courses and compiles a list of recommendations to improve the courses. These recommendations may involve changing a course component, recommending a teaching methodology or requesting an explanation from an instructor. The PAU reviews the same data and looks for issues related to the overall program. Program-level observations and recommendations are submitted to the department council for review. The PAU also prepares program-level semester-wise and annual assessment reports for all courses in the study plan.

4.2.9. Decisions Taken Based on Assessment Findings

The department council is the decision-making authority. The council reviews the recommendations submitted by the curriculum committee related to the courses and the recommendations from the PAU related to the overall program. Each recommendation is discussed, and the council decides either to approve or to reject it.

4.2.10. Decision Implementation

All decisions approved by the department council are implemented by the department and college administrations. The quality of implementation and its impact are reviewed in the next cycle.

5. Process Implementation

Quality assurance and continuous improvement involving all faculty members was the main objective of this process. The program assessment unit (PAU), under the supervision of the program assessment committee (PAC), was established to facilitate faculty members, manage the collected data, perform course evaluation, and generate assessment reports.

5.1. Tools

The PAU was responsible for ensuring the implementation of the course design and assessment process, with proper documentation showing meaningful participation by all faculty members. The following tools were used to implement the assessment tasks defined in Section 4.2.

5.1.1. Assessment Information System (AIS)

This system was designed and implemented in-house as a single source of information for all participants and roles in the assessment process. The system plays a major role in the implementation of assessment tasks 1, 2, 3, and 5. It has two components: an application and a portal. The application provides a web-based interface to manage approved courses and data collection for assessment purposes. An evaluation engine computes assessment metrics and generates assessment reports to meet the needs of instructors, reviewers and management. The portal provides course material, approved course syllabuses and system-generated assessment reports.

5.1.2. Document Check List

This document was required to establish a standard for dealing with academic diversity and is used to implement assessment task 3. It lists all possible material that could be useful for course assessment, and establishes a minimum requirement on what must be submitted for each course, both online and offline in person.

5.1.3. Student Survey Form

This is a system-generated customized form used to get students' feedback on the approved CLOs for each course. It is used in assessment tasks 3, 4, and 5.

5.1.4. Student Outcome Form

This is also a system-generated form, containing information on the approved CLO-SO mapping. The form is designed to ensure that the course instructor is aware of the approved CLOs and SOs for the course and can produce direct evidence for SO achievements in the course. This form is used in assessment tasks 3, 6 and 7.

5.1.5. Compliance Follow-Up

PAU staff are responsible for the implementation of all assessment tasks. A compliance follow-up sheet is used for assessment tasks 1, 3, 6, and 7 to keep track of participation and compliance by the relevant faculty members in completing each task.

5.1.6. Instructor Self Assessment Form

Instructors are required to reflect on students' performance in their sections using the system-generated assessment reports at the section and course levels. This form collects the instructor's feedback on each of the assessment metrics listed in the data evaluation task.

5.1.7. Coordinator Review Form

The course coordinator is required to review the instructor reports and the assessment reports of all sections of a course and to complete a review form. This form has three major components: general observations on the current version of the approved course, comments and observations on the assessment results in different sections of the course, and recommendations to improve the course.

All these tools are available online at: https://db.tt/zYM5UXsa.

5.2. Results

The course-level analysis and recommendations process starts after completion of the course reviews, to solve the problems and issues observed, as shown in Figure 6. The curriculum committee reviewed 23 courses that had been reviewed by course instructors and coordinators, and made recommendations for 9 courses.

• Two courses need a complete review, as they have quality and clarity issues in the course syllabus, CLOs and CLO-SO mapping. The submitted material was not sufficient to support the computed assessment results.

• Four courses need a review of their CLOs, as the CLOs were either not covered in the course assessments or not well defined. This resulted in incomplete SO results.

• Two courses needed better technical support for students to complete their assigned work outside class.

Program-level recommendations resulting from the reviews and assessment processes were:

• Workshops are needed to develop a better understanding among students and course instructors of CLOs and SOs and their importance. This will help in collecting better-quality outcome evidence.

• Better coordination between course instructors and course coordinators is required, and a policy is needed to define the interaction between instructors and coordinators. A general pattern was observed where instructors from outside the department did not cover all CLOs and the collected evidence was not good enough.

• Guidelines are needed for the mapping between questions and CLOs in assessment design. It was observed that instructors mapped multiple CLOs to a single question, which resulted in inaccurate assessment values for CLOs and SOs. It was recommended that instructors should have questions worth at least 75% of the assessment grade with a one-to-one mapping between question and CLO.

• The focus of course instructors is on teaching and on ensuring that all CLOs are covered; they are not usually concerned with SOs. It was observed that evidence collected for CLOs could not always be used as SO evidence, even though there is a mapping between the CLO and an SO in the course. A review of the CLO-SO mapping is required to ensure that evidence collected for a CLO can be used as evidence for an SO whenever a mapping exists between the two.

6. Conclusion

Diversity in academic background, qualifications and training requires extra effort to establish commonly agreed standards that may not be needed in a homogeneous academic environment. The issues, approaches and solutions suggested here are applicable to any academic institution with a diverse faculty that is seeking to acquire and maintain national or international accreditation. This paper presents a complete assessment process together with tools to implement it. The process was used to gain accreditation from ABET and is being used to obtain accreditation from NCAAA, a national accreditation body.

References

  1. Al-Attar, H., & Abu-Jdayil, B. (2010). Outcome Assessments of Petroleum Engineering Program at United Arab Emirates University: A Systematic Approach. Advances in Sustainable Petroleum Engineering Science, 2, 47-68.
  2. Boud, D. (2010). Assessment 2020: Seven Propositions for Assessment Reform in Higher Education. Sydney: Australian Learning and Teaching Council. www.assessmentfutures.com
  3. Crawley, E. F., Malmqvist, J., Östlund, S., Brodeur, D. R., & Edström, K. (2014). Rethinking Engineering Education. Cham: Springer International Publishing. http://link.springer.com/10.1007/978-3-319-05561-9
  4. Fu, J. C., Gourley, M., Park, M.-A., Qian, G., Sung, H., & Turner, T. (2014). Obtaining and Maintaining ABET Accreditation: An Experience-Based Review of the ABET Criteria for Computer Science Programs. Journal of Computing Sciences in Colleges, 29, 13-19.
  5. Harmanani, H. M. (2013). A Bottom-Up Outcome-Based Learning Assessment Process for Accrediting Computing Programs. Proceedings of the International Conference on Frontiers in Education: Computer Science and Computer Engineering. http://www.researchgate.net/profile/Haidar_Harmanani/publication/258452571_A_Bottom-Up_Outcome-Based_Learning_Assessment_Process_for_Accrediting_Computing_Programs/links/02e7e5284cb02e1555000000.pdf
  6. Helmick, M. T., & Gannod, G. C. (2009). Streamlining and Integration of Miami Three-Tier Outcomes Assessment for Sustainability. 39th IEEE Frontiers in Education Conference, FIE ’09, 1-6. http://dx.doi.org/10.1109/FIE.2009.5350473
  7. Kezar, A., & Maxey, D. (2014). Student Outcomes Assessment among the New Non-Tenure-Track Faculty Majority. Occasional Paper, No. 21. http://www.learningoutcomeassessment.org/documents/OP218-29-14.pdf
  8. Kuh, G. D., Jankowski, N., Ikenberry, S. O., & Kinzie, J. (2014). Knowing What Students Know and Can Do: The Current State of Student Learning Outcomes Assessment in US Colleges and Universities. National Institute for Learning Outcomes Assessment. www.learningoutcomesassessment.org
  9. Pierrakos, O., & Watson, H. (2013). A Comprehensive ABET-Focused Assessment Plan Designed to Involve All Program Faculty. Frontiers in Education Conference, 2013 IEEE, 1716-1722. http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=6685131
  10. Reed, J., & Zhang, H. (2013). A Hierarchical Framework for Mapping and Quantitatively Assessing Program and Learning Outcomes. Proceedings of the 18th ACM Conference on Innovation and Technology in Computer Science Education, 52-57. http://dl.acm.org/citation.cfm?id=2462481
  11. Sundararajan, S. (2014). A Strategy for Sustainable Student Outcomes Assessment for a Mechanical Engineering Program That Maximizes Faculty Engagement. Mechanical Engineering Conference Presentations, Papers, and Proceedings. http://lib.dr.iastate.edu/me_conf/54