iBusiness, 2011, 3, 353-358
doi:10.4236/ib.2011.34047 Published Online December 2011 (http://www.SciRP.org/journal/ib)
The Usefulness of Global Student Rating Items
under End Program Evaluation Surveys in Quality
Improvements: An Institutional Experience in
Higher Education, Saudi Arabia
Abdullah Al Rubaish
Office of the President, University of Dammam, Dammam, Saudi Arabia.
Email: dwivedi7@gmail.com, dwivedi7@hotmail.com
Received September 19th, 2011; revised October 23rd, 2011; accepted November 2nd, 2011.
ABSTRACT
Program evaluation surveys (PES) completed by students are one of a range of evaluations of academic programs in higher education; others include course evaluations, evaluations of teaching skills, and surveys of facilities and services. The present study employs PES data collected in the Colleges of Dentistry and Medicine, University of Dammam (UD), Saudi Arabia. Our PES relates to students' experience at the end of their academic program. The present paper analyses these data and discusses the usefulness of global item results vis-a-vis individual item results for quality improvement in higher education. The participation rates were 100% (Dentistry) and 65% (Medicine). In view of the poorly graded global item results, the PES results reveal a need to focus on global item results, leading to continuing improvements in all areas covered by the questionnaire.
Keywords: Global Item, Individual Items, Program Evaluation Survey, Academic Program, Higher Education, High Quality, Acceptable and Improvement Required
1. Introduction
It is mandatory for academic institutions in higher education to perform various continuing evaluations of the courses offered, the teaching skills of faculty members, and facilities and services. This is especially the case if the institution is pursuing accreditation for its academic programs, further improvement in quality, or both. The data generated through these evaluations, if collected accurately, analyzed appropriately and interpreted correctly [1-4], provide some of the most important inputs required in this regard. Furthermore, the utility of such evidence can be maximized by enhancing the awareness and knowledge of users and policy planners [2,4-5].
At the University of Dammam, academic programs are currently in phases of developmental review. These evaluation activities are at their peak in five colleges: Dentistry, Medicine, Nursing, Applied Medical Sciences and Architecture. All the remaining colleges are also developing such evaluation practices. A series of earlier publications have addressed some of the merits and demerits of such evaluation results [6-19]. Despite the documented limitations of such surveys, the related results remain the backbone of the mandatory inputs for further quality improvements in higher education [5-20]. Their innovative uses may meet the varying requirements of users and policy planners [1,21-22].
Primarily to obtain academic accreditation from the National Commission for Academic Accreditation & Assessment (NCAAA), UD focuses on three evaluations: the course evaluation survey (CES), the student experience survey (SES), and the program evaluation survey (PES). A recent study by Rubaish, Wosornu and Dwivedi [4] used CES data from a nursing program to describe institutional practice related to students' global experience at the end of a course, and its comparative appraisal with students' experience of various aspects of that course. They also described the utility of the global item in deriving policy-oriented clues at three upper levels, namely semester, year and program.
The present article deals with PES data and has a two-fold objective. First, it describes university practice
related to students' global experience at the end of a program, and its comparative appraisal with students' experience of various aspects of that academic program. Secondly, it describes its use in deriving policy-oriented clues in different environments.
The observations on students' global experience at the end of a program, and its comparative appraisal with students' experience of various aspects of that academic program, might help policy planners expedite developmental measures [23-25]. Its comparative use in deriving policy-oriented clues in different environments is expected to be equally useful. Furthermore, from a policy point of view, other academic institutions might also find these observations potentially useful in expediting quality measures for their own comparable academic programs.
There are seven remaining sections in the article. Data collection and the methods used in the analysis are described in Section 2, "Materials and Methods". Section 3, "Results and Discussion", describes the PES results in the two colleges as well as comparative results. Section 4, "Summary and Conclusions", points out the issues related to the utility of global item results vis-a-vis individual item results. The next three sections cover limitations, future study, and acknowledgements. Finally, references are listed.
2. Materials and Methods
2.1. Data
The PES data sets were acquired from two academic programs: a 12-semester program of Bachelor of Dental Surgery (BDS), and a 12-semester program of Bachelor of Medicine & Bachelor of Surgery (MBBS). The BDS data [3] were collected on 27 October 2010 from students who had completed all 12 semesters of the program and registered as interns during the academic year 2010-2011. The corresponding MBBS data were collected during May-June 2011 from students who had completed all 12 semesters of that program and joined as interns during the academic year 2010-2011.
Under BDS, the PES questionnaire was given to each of the 21 students and retrieved from all of them. Under MBBS, the questionnaire could be given to and collected from 65 out of 100 students. Thus, the response rate was 100% for BDS and 65% for MBBS. This coverage satisfies a requirement for generalisability of the observed results [26], especially within the respective colleges of UD. The PES questionnaire had a total of 22 items (Appendix 1), the 22nd being the global item. Each item is a Likert-type item: the degree of agreement with a statement was recorded on a five-point ordinal scale [3].
2.2. Analytical Methods
For item-by-item analysis of evaluation data on an ordinal scale [27-28], the appropriate methods are the same as those documented by Rubaish et al. [1] and used by Rubaish [2-4]. For completeness, the four measures used in the item-by-item analysis and the respective performance grading criteria [1] are reproduced below:
Performance Grading | Mean | Median | First Quartile | Cumulative % of students with score 4 or 5
High Quality | 3.6 and above | 4 & 5 | 4 & 5 | 80 and above
Acceptable | 2.6 - 3.6 | 3 | 3 | 60 - 80
Improvement required | Less than 2.6 | 1 & 2 | 1 & 2 | Less than 60
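To make the grading procedure concrete, the following is a minimal sketch in Python (with NumPy) of how the four measures and the grading criteria above could be computed for a single item. The function names are illustrative; the quartile convention and the handling of boundary values (the bands in the source table touch at a mean of 3.6 and at 60 - 80%) are assumptions, since the paper does not specify them.

```python
import numpy as np

def item_measures(ratings):
    """The four measures used in the item-by-item analysis.

    `ratings` is a sequence of 1-5 Likert responses to one item.
    NumPy's default (linear-interpolation) quartile is an assumption;
    the paper does not state which quartile convention was used.
    """
    r = np.asarray(ratings, dtype=float)
    return {
        "mean": float(r.mean()),
        "median": float(np.median(r)),
        "first_quartile": float(np.percentile(r, 25)),
        # cumulative % of students with score 4 or 5
        "pct_4_or_5": 100.0 * float(np.mean(r >= 4)),
    }

def grade(measures):
    """Apply the performance grading criteria from the table above.

    Boundary values (e.g. a mean of exactly 3.6, or exactly 80%)
    are resolved upward here; the source table leaves them ambiguous.
    """
    def band(value, high, acceptable):
        if value >= high:
            return "High Quality"
        if value >= acceptable:
            return "Acceptable"
        return "Improvement required"

    return {
        "mean": band(measures["mean"], 3.6, 2.6),
        "median": band(measures["median"], 4, 3),
        "first_quartile": band(measures["first_quartile"], 4, 3),
        "pct_4_or_5": band(measures["pct_4_or_5"], 80, 60),
    }
```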
3. Results and Discussion
The analytical results for each item in the PES of the Colleges of Dentistry and Medicine are listed in Table 1. The following subsections describe the observations.
3.1. College of Dentistry
When the mean grading criterion was considered (Table 1), an "acceptable" rating was observed for the majority of the items, 14/21 (67%). Maintaining consistency [4], the related global item was also rated "acceptable". Further, the grading of the majority of the remaining items was "improvement required". When the median performance grading criterion was considered, "high quality", "acceptable" and "improvement required" gradings were observed in 4 (19%), 11 (52%) and 6 (29%) of the 21 individual items, respectively. Accordingly, the related global item again remained "acceptable".
Instead of the earlier target of achieving satisfaction among at least 50% of students (through consideration of the median), one may prefer to raise the satisfaction level to at least 75% of students (the first quartile). The related grading criterion lowered the proportion of items graded "acceptable" to 29% (6/21), but increased those graded "improvement required" to 67% (14/21). Under the performance grading criterion based on a further increase in the satisfaction level to at least 80%, 19/21 (90%) of the individual items need further improvement (Table 1). Again, consistent with these results, the global item also changed to "improvement required" in each case.
Table 1. College specific program evaluation survey results.
Dentistry (n = 21) Medicine (n = 65)
Item Mean Median First Quartile % of 4 & 5 Mean Median First Quartile % of 4 & 5
1 3.0 3 2 43 2.8 3 2 21.5
2 3.3 4 3 52 3.0 3 2 30.8
3 2.7 3 2 29 2.7 3 2 13.8
4 2.5 2 2 19 2.2 2 2 9.2
5 3.1 3 3 33 3.7 4 3 69.2
6 2.8 3 2 33 3.0 3 3 24.6
7 2.4 2 2 19 2.7 3 2 13.8
8 4.0 4 4 81 3.8 4 3 69.2
9 3.1 3 3 48 3.2 3 3 32.3
10 2.1 2 1 14 3.1 3 3 36.9
11 3.5 3 3 48 3.3 3 3 47.7
12 2.1 2 1 14 2.6 2 2 15.4
13 1.6 1 1 5 1.7 1 1 6.2
14 2.6 2 2 24 2.8 3 2 29.2
15 3.2 3 3 48 2.9 3 2 25.4
16 3.2 3 2 48 3.3 3 3 40.0
17 3.2 3 2 43 3.1 3 3 32.3
18 3.4 4 3 52 3.3 3 3 46.2
19 3.1 3 2 33 3.3 3 3 43.1
20 2.8 3 2 33 2.9 3 2 26.2
21 3.6 4 3 62 3.2 3 3 41.5
22 3.2 3 2 48 3.1 3 3 30.6
Thus, under such circumstances, it is more meaningful to rely on the global item results, leading to the need for corrective measures on each individual item.
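As an illustrative check, feeding the Table 1 summary measures for the Dentistry global item (item 22) into the hypothetical grade() sketch from Section 2.2 reproduces the pattern just described: "acceptable" under the mean and median criteria, "improvement required" once the first-quartile or 80% thresholds are applied.

```python
# Table 1, Dentistry, item 22 (the global item):
# mean 3.2, median 3, first quartile 2, 48% scoring 4 or 5.
dentistry_global = {
    "mean": 3.2,
    "median": 3,
    "first_quartile": 2,
    "pct_4_or_5": 48,
}
print(grade(dentistry_global))
# {'mean': 'Acceptable', 'median': 'Acceptable',
#  'first_quartile': 'Improvement required',
#  'pct_4_or_5': 'Improvement required'}
```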
3.2. College of Medicine
As in the College of Dentistry, under the mean grading criterion (Table 1) an "acceptable" rating was observed for the majority of the items, 17/21 (81%). Maintaining consistency [4], the related global item was also rated "acceptable". Under the median performance grading criterion, almost all "acceptable" items remained unchanged. Accordingly, the related global item again remained "acceptable".
Again, instead of the earlier target of achieving satisfaction among at least 50% of students (the median), one may prefer to raise the satisfaction level to at least 75% of students (the first quartile). The related grading criterion increased the items graded "improvement required" to 48% (10/21), but a higher proportion of items, 52% (11/21), still remained "acceptable". Hence, consistent with this result, the global item also remained "acceptable".
Under the performance grading criterion based on further raising the satisfaction level to at least 80%, 19/21 (90%) of the individual items need further improvement (Table 1). Again, consistent with this result, the global item also changed to "improvement required".

In summary, under such circumstances, it is more meaningful to rely on the results of the global item, leading to the need for corrective measures on each individual item.
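The same hypothetical check for the Medicine global item reproduces the pattern just described: with a first quartile of 3 rather than 2, the grading stays "acceptable" up to the 75% threshold and drops to "improvement required" only at the 80% threshold.

```python
# Table 1, Medicine, item 22 (the global item):
# mean 3.1, median 3, first quartile 3, 30.6% scoring 4 or 5.
medicine_global = {
    "mean": 3.1,
    "median": 3,
    "first_quartile": 3,
    "pct_4_or_5": 30.6,
}
print(grade(medicine_global))
# {'mean': 'Acceptable', 'median': 'Acceptable',
#  'first_quartile': 'Acceptable',
#  'pct_4_or_5': 'Improvement required'}
```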
3.3. Comparative Results
In both colleges, the grading of the global item is consistent with that of the individual items. Further, under the mean
as well as the median grading criteria, both colleges had an almost identical pattern of results. However, a comparatively lower proportion of individual items in the College of Medicine were graded "improvement required". As a result, when the threshold of satisfaction among students was raised to at least 75%, a higher proportion of items in the College of Dentistry reached the grading "improvement required", and the global item grading changed to this level as well. By contrast, the global item grading in the College of Medicine remained "acceptable". However, with a further increase in the threshold of satisfaction among students to at least 80%, the global item grading in the College of Medicine also reached "improvement required".
4. Summary and Conclusions
Thus, irrespective of the grading criterion, both colleges need to focus on the global item results, leading to corrective measures related to almost all individual items. However, under changing thresholds of satisfaction among students, the two colleges need slightly different corrective measures. Other institutions with similar environments, especially those working towards quality and academic accreditation in higher education, might also find these observations useful.
5. Limitations
This study is limited to only two colleges of this university, with their specific environments. Also, one of the academic programs considered involves a comparatively small number of students. To ensure appropriate generalisability of the results, even in similar environments, programs involving larger numbers of students would be a better choice. Accordingly, caution is needed when generalizing these results.
6. Future Research
Each program, as well as each college, involves a different environment [2-3]. Thus, each college requires such evaluations for each of its academic programs [3]. The meaningful clues derived from such evaluations may help policy planners develop and manage sustainable high quality in higher education. Feedback from students regarding an academic program is especially useful when the program is in an early phase of development.
7. Acknowledgements
The author is thankful to Professor Lade Wosornu and Professor Sada Nand Dwivedi, Deanship of Quality & Academic Accreditation (DQAA), University of Dammam (UD), for their help in completing this article. He is also thankful to the Dean, College of Dentistry, and the Quality Management Officer, Q & P Unit, College of Dentistry, for cooperation in data collection. Help from Mr. R. Somasundaram, Mr. Arun Vijay and Mr. C. C. L. Raymond, DQAA, UD, in this regard is also acknowledged. Further, he is equally thankful to the Dean, College of Medicine, and Prof. E. B. Larbi, Coordinator, Q & P Unit, for cooperation in data collection, and to Mr. Sachin Jose, DQAA, UD, for his help in the analysis. He thanks all students for their mature, balanced and objective responses. Finally, he thanks Ms. Marg Ungson for secretarial assistance.
REFERENCES
[1] A. Al Rubaish, L. Wosornu and S. N. Dwivedi, “Using
Deductions from Assessment Studies towards Further-
ance of the Academic Program: An Empirical Appraisal
of Institutional Student Course Evaluation,” iBusiness,
Vol. 3, No. 2, 2011, pp. 220-228.
[2] A. Al Rubaish, “On the Contribution of Student Experi-
ence Survey Regarding Quality Management in Higher
Education: An Institutional Study in Saudi Arabia,”
Journal of Service Science & Management, Vol. 3, No. 4,
2010, pp. 464-469.
[3] A. Al Rubaish, “A Comparative Appraisal of Timings for
Program Evaluation Survey and Related Institutional Re-
sults in Saudi Arabia: Quality Management in Higher
Education,” Journal of Service Science & Management,
Vol. 4, No. 4, 2011, pp. 184-190.
[4] A. Al Rubaish, L. Wosornu and S. N. Dwivedi, “Ap-
praisal of Using Global Student Rating Items in Quality
Management of Higher Education in Saudi Arabian Uni-
versity,” iBusiness, (not published).
[5] P. Gravestock and E. Gregor-Greenleaf, “Student Course
Evaluations: Research, Models and Trends,” Higher Edu-
cation Quality Council of Ontario, Toronto, 2008.
[6] L. P. Aultman, “An Expected Benefit of Formative Stu-
dent Evaluations,” College Teaching, Vol. 54, No. 3,
2006, pp. 251-285. doi:10.3200/CTCH.54.3.251-285
[7] T. Beran, C. Violato and D. Kline, “What’s the ‘Use’ of
Students Ratings of Instruction for Administrators? One
University’s Experience,” Canadian Journal of Higher
Education, Vol. 35, No. 2, 2007, pp. 48-70.
[8] L. A. Braskamp and J. C. Ory, "Assessing Faculty Work: Enhancing Individual and Institutional Performance," Jossey-Bass, San Francisco, 1994.
[9] J. P. Campbell and W. C. Bozeman, “The Value of Stu-
dent Ratings: Perceptions of Students, Teachers and Ad-
ministrators,” Community College Journal of Research
and Practice, Vol. 32, No. 1, 2008, pp. 13-24.
doi:10.1080/10668920600864137
[10] W. E. Cashin and R. G. Downey, “Using Global Student
Rating Items for Summative Evaluation,” Journal of Edu-
cational Psychology, Vol. 84, No. 4, 1992, pp. 563-572.
doi:10.1037/0022-0663.84.4.563
[11] M. R. Diamond, “The Usefulness of Structured Mid-
Term Feedback as a Catalyst for Change in Higher Edu-
cation Classes,” Active Learning in Higher Education,
Vol. 5, No. 3, 2004, pp. 217-231.
doi:10.1177/1469787404046845
[12] L. C. Hodges and K. Stanton, “Translating Comments on
Student Evaluations into Language of Learning,” Innova-
tive Higher Education, Vol. 31, No. 5, 2007, pp. 279-286.
doi:10.1007/s10755-006-9027-3
[13] J. W. B. Lang and M. Kersting, “Regular Feedback from
Student Ratings of Instruction: Do College Teachers Im-
prove Their Ratings in the Long Run?” Instructional Sci-
ence, Vol. 35, No. 3, 2007, pp. 187-205.
doi:10.1007/s11251-006-9006-1
[14] H. W. Marsh, “Do University Teachers Become More
Effective with Experience? A Multilevel Growth Model
of Students’ Evaluations of Teaching over 13 Years,”
Journal of Educational Psychology, Vol. 99, No. 4, 2007,
pp. 775-790. doi:10.1037/0022-0663.99.4.775
[15] R. J. Menges, "Shortcomings of Research on Evaluating and Improving Teaching in Higher Education," In: K. E. Ryan, Ed., Evaluating Teaching in Higher Education: A Vision for the Future, Vol. 83, 2000, pp. 5-11.
[16] A. R. Penny and R. Coe, “Effectiveness of Consultations
on Student Ratings Feedback: A Meta-Analysis,” Review
of Educational Research, Vol. 74, No. 2, 2004, pp. 215-
253. doi:10.3102/00346543074002215
[17] R. E. Wright, “Student Evaluations of Faculty: Concerns
Raised in the Literature, and Possible Solutions,” College
Student Journal, Vol. 40, No. 2, 2008, pp. 417-422.
[18] F. Zabaleta, “The Use and Misuse of Student Evaluation
of Teaching,” Teaching in Higher Education, Vol. 12, No.
1, 2007, pp. 55-76. doi:10.1080/13562510601102131
[19] A. S. Aldosary, “Students’ Academic Satisfaction: The
Case of CES at KFUPM,” Journal of King AbdulAziz
University, Vol. 11, No. 1, 1999, pp. 99-107.
[20] M. Yorke, “Student Experience Surveys: Some Meth-
odological Considerations and an Empirical Investiga-
tion,” Assessment & Evaluation in Higher Education, Vol.
34, No. 6, 2009, pp. 721-739.
doi:10.1080/02602930802474219
[21] W. J. McKeachie, "Student Ratings: The Validity of Use," American Psychologist, Vol. 52, No. 11, 1997, pp. 1218-1225. doi:10.1037/0003-066X.52.11.1218
[22] M. Theall and J. Franklin, “Looking for Bias in All the
Wrong Places: A Search for Truth or a Witch Hunt in
Student Ratings of Instruction?” In M. Theall, P. C.
Abrami and L. A. Mets, Eds., The Student Ratings Debate:
Are They Valid? How Can We Best Use Them? Vol. 109,
2001, pp. 45-46.
[23] NCAAA, “Handbook of Quality Assurance and Accredi-
tation in Saudi Arabia, Part 2 Internal Quality Assurance
Arrangements,” Monograph, University of Dammam,
September 2007, p. 19.
[24] P. C. Abrami, "Improving Judgements About Teaching Effectiveness Using Teacher Rating Forms," John Wiley & Sons, Inc., New York, 2001.
[25] C. S. Nir and L. Bennet, "Using Student Satisfaction Data to Start Conversations About Continuous Improvement," Quality Approaches in Higher Education, Vol. 2, No. 1, 2011, pp. 17-22.
[26] W. E. Cashin, “Students do Rate Different Academic
Fields Differently,” In M. Theall and J. Franklin, Eds.,
Student Ratings of Instruction: Issues for Improving
Practice, Vol. 43, 1990, pp. 113-121.
[27] R. Göb, C. McCollin and M. F. Ramalhoto, "Ordinal Methodology in the Analysis of Likert Scales," Quality & Quantity, Vol. 41, No. 5, 2007, pp. 601-626. doi:10.1007/s11135-007-9089-z
[28] K. R. Sundaram, S. N. Dwivedi and V. Sreenivas, “Medi-
cal Statistics: Principles & Methods,” BI Publications Pvt.
Ltd., New Delhi, 2009.
Appendix 1. Program Evaluation Survey Questionnaire
Items:
1) Adequate academic and career counselling was
available for me throughout the program.
2) The instructors were available for consultation and
advice when I needed to speak with them.
3) The instructors in the program inspired me to do my
best.
4) The instructors in the program gave me helpful
feedback on my work.
5) The instructors in the program had thorough knowledge of the content of the courses they taught.
6) The instructors were enthusiastic about the pro-
gram.
7) The instructors cared about the progress of their
students.
8) What I have learned in this program will be valu-
able for my future.
9) Study materials in courses were up-to-date and use-
ful.
10) Library resources were adequate and available
when I needed them.
11) Classroom facilities (for lectures, laboratories, tutorials, etc.) were of good quality.
12) Student computing facilities were sufficient for my
needs.
13) Adequate facilities were available for extracurricu-
lar activities (including sporting and recreational
activities).
14) Adequate facilities were available for religious ob-
servances.
15) Field experience programs (internship, practicum,
cooperative training) were effective in developing
my skills. (Omit this item if not applicable to your
program).
16) As a result of this program I have developed suffi-
cient interest to want to continue to keep up to date
with new developments in my field of study.
17) The program developed my ability to investigate
and solve new problems.
18) The program improved my ability to work effec-
tively in groups.
19) The program improved my skills in communication.
20) I have developed good basic skills in using tech-
nology to investigate issues and communicate re-
sults.
21) I am confident that I have developed the knowledge
and skills required for my chosen career.
22) Overall, I was satisfied with the quality of my
learning experiences at this institution.