
that initially assesses their current understanding of the material. The pretest consists of 20 - 25 multiple-choice questions based on the chapter contents and generates a pretest score from the student's responses. Students can review the items, with correct answers revealed for any mistakes. MyPsychLab then creates a unique study plan, specific to the areas needing improvement based on that student's pretest profile (e.g., brain anatomy modules such as flashcards, quizzes, and demonstrations may be included if many or all of the corresponding pretest items were answered incorrectly). Specific segments of the textbook are also included so that students can locate and review the necessary material. After the study plan has been reviewed, students may proceed to the posttest, consisting of 20 - 25 questions (different from those in the pretest, but common to all students). A summary score is derived, and students can review individual items; in this case, however, correct answers are not revealed. Students can return to the study plan for further review and attempt the same posttest to improve their score. MyPsychLab records (in an instructor's gradebook) students' last posttest mark prior to the posted deadline. In this way, students receive continual feedback about their learning performance and can more easily identify areas of weakness.
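As a minimal sketch of the workflow just described, the following illustration shows pretest responses mapped to study modules and the last on-time posttest being recorded. All names and data structures here are our own hypothetical assumptions, not Pearson's actual implementation.

```python
from datetime import datetime

# Hypothetical mapping from pretest items to study modules; the real
# MyPsychLab mapping is internal to the product.
ITEM_TO_MODULES = {
    "q1": ["brain anatomy flashcards", "brain anatomy quiz"],
    "q2": ["brain anatomy demonstration"],
}

def build_study_plan(pretest_responses):
    """Collect study modules for every pretest item answered incorrectly."""
    plan = set()
    for item_id, answered_correctly in pretest_responses.items():
        if not answered_correctly:
            plan.update(ITEM_TO_MODULES.get(item_id, []))
    return sorted(plan)

def gradebook_mark(posttest_attempts, deadline):
    """Record the last posttest score submitted before the posted deadline.

    Assumes `posttest_attempts` is a chronological list of dicts with
    `submitted` (datetime) and `score` keys.
    """
    on_time = [a for a in posttest_attempts if a["submitted"] <= deadline]
    return on_time[-1]["score"] if on_time else None

plan = build_study_plan({"q1": False, "q2": True})
mark = gradebook_mark(
    [{"submitted": datetime(2009, 3, 1), "score": 7.5}],
    deadline=datetime(2009, 3, 15),
)
```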
Instructors often include this type of tool as a learning re-
source, assuming it should predict performance on both tests
and assignments. Previous research has supported this notion.
For example, Wininger (2005) observed an increase of 10% on
final exam scores among students who received a form of for-
mative assessment after a midterm exam. Similarly, Buchanan
(2000) observed higher performance on final exams among
students who made use of PsyCAL web-based software. How-
ever, the empirical evidence is not entirely positive regarding
the use of formative feedback and web-based tools. In their
study of formative assessment, Gijbels and Dochy (2006) ob-
served a shift to more surface-level thinking and learning prac-
tices among students who received weekly feedback from an
instructor about their progress in the course of learning new
concepts. Likewise, researchers reported no significant relation
between performance on the final examination and students’
use of MyMathLab (Tzufang, 2009). However, since Tzufang's research utilized a math-based curriculum (as opposed to a
subject with numerous domains within a myriad of theoretical
frameworks), there is reason to believe the software may have
been ineffective because of limited opportunities for feedback
and assistance.
Despite the various challenges observed in the use of formative assessment, we assumed in the present study that an online tool such as MyPsychLab can provide students with the kind of practice, reinforcement, and feedback that proves helpful in mastering introductory-level material. Using an especially large sample of students and a variety of means to assess student mastery and performance, we hypothesized that MyPsychLab scores would both correlate with the other performance measures and yield acceptable reliability coefficients alongside them.
Method
Participants and Measures
The present sample consisted of 1251 students (851 females, 68%) at the University of Windsor in Southwestern Ontario, Canada, enrolled in an introductory psychology course for the January 2009 semester. The course used six measures to derive student performance: 1) a 120-item multiple-choice midterm test (worth 35% of the course grade) based on the first four chapters covered in the semester (child development, adulthood and aging, cognition, and intelligence); 2) a 120-item multiple-choice comprehensive final examination (40%), which included material covered on the midterm but emphasized recent coverage of social and applied psychology, personality, psychopathology, and treatment of mental disorders; 3) MyPsychLab (10%), based on the completion of 9 chapter posttests, one of which (the applied psychology chapter) was given double weight to yield a score out of 10; 4) a peer-review assignment (10%), wherein students completed a 1 - 2 page written assignment graded by fellow students; 5) a class participation mark (5%) using electronic voting devices or clickers. Whereas these five measures yielded 100% of the grade, students also had the opportunity 6) to earn three bonus marks in the course through participation in ongoing psychology research.
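As a minimal sketch (our own illustration, not part of the course software), the weighting scheme above can be expressed as follows, assuming each component is scored as a 0 - 100 percentage:

```python
# Component weights as described above; bonus marks are added on top.
WEIGHTS = {
    "midterm": 0.35,
    "final_exam": 0.40,
    "mypsychlab": 0.10,
    "peer_review": 0.10,
    "clicker": 0.05,
}

def course_grade(scores, bonus_marks=0.0):
    """Weighted course grade from component percentages (0 - 100).

    `scores` maps each component to a 0 - 100 percentage; up to three
    bonus marks from research participation are added afterward.
    """
    base = sum(WEIGHTS[name] * scores[name] for name in WEIGHTS)
    return base + bonus_marks

# Example: a student with 80% on every component plus 2 bonus marks.
print(course_grade({k: 80.0 for k in WEIGHTS}, bonus_marks=2.0))  # 82.0
```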
Results
Table 1 shows the intercorrelations (all significant at p < .05, df = 1249) among the six measures. Using Pearson product-moment correlations, the correlations between students' MyPsychLab scores and the other measures ranged from .299 (course midterm) to .488 (clicker). A reliability analysis, using standardized transformations of the six measures, showed a Cronbach's alpha coefficient of .76, with moderate item-total correlations (see Table 1) for each measure, suggesting that each contributed to the measurement of a common construct (though arguably less so for the research bonus marks).
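For reference, the reliability coefficient reported above can be computed as in the following sketch, a generic illustration using the standard Cronbach's alpha formula (variable names are our own, not those of the software used for the analysis):

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_students, n_measures) score array."""
    X = np.asarray(scores, dtype=float)
    k = X.shape[1]
    item_variances = X.var(axis=0, ddof=1).sum()
    total_variance = X.sum(axis=1).var(ddof=1)
    return (k / (k - 1.0)) * (1.0 - item_variances / total_variance)

def standardized_alpha(scores):
    """Alpha on z-scored columns, matching the standardized
    transformations of the six measures described above."""
    X = np.asarray(scores, dtype=float)
    Z = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)
    return cronbach_alpha(Z)
```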
Given the large number of participants, the data were analyzed using two data-reduction methods. First, we used a principal components analysis, wherein the relative contribution of each measure was assumed to be equal (with diagonal communalities set to unity). This method extracted one component (rendering rotation infeasible), explaining 46% of the shared variance. Examination of the component matrix (see Table 1) showed that MyPsychLab was the highest contributor to the single latent component (.734). Second, we used a principal axis factor analysis, wherein the relative contribution of each measure was not assumed to be equal (with diagonal entries set to prior communality estimates). This method also extracted one factor, explaining 35% of the shared variance. Examination of the factor matrix showed that MyPsychLab was the highest contributor to the single latent factor (.667).
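To clarify the difference between the two extraction methods, the sketch below implements both on a correlation matrix R under their standard definitions; it is a generic illustration, not the exact routine used in the analysis:

```python
import numpy as np

def pca_first_component(R):
    """First principal component of correlation matrix R, keeping
    unities on the diagonal (equal contributions assumed)."""
    R = np.asarray(R, dtype=float)
    vals, vecs = np.linalg.eigh(R)
    i = np.argmax(vals)
    loadings = vecs[:, i] * np.sqrt(vals[i])
    if loadings.sum() < 0:              # fix the arbitrary sign
        loadings = -loadings
    return loadings, vals[i] / len(R)   # loadings, variance explained

def paf_first_factor(R, n_iter=100):
    """Principal axis factoring: replace the diagonal with prior
    communality estimates (squared multiple correlations), then
    iterate until the communalities stabilize."""
    R = np.asarray(R, dtype=float).copy()
    h2 = 1.0 - 1.0 / np.diag(np.linalg.inv(R))   # SMC priors
    for _ in range(n_iter):
        Rh = R.copy()
        np.fill_diagonal(Rh, h2)
        vals, vecs = np.linalg.eigh(Rh)
        i = np.argmax(vals)
        loadings = vecs[:, i] * np.sqrt(vals[i])
        h2 = loadings ** 2
    if loadings.sum() < 0:
        loadings = -loadings
    return loadings, vals[i] / len(R)
```

Because the unit communalities credit all of each measure's variance to the solution, the PCA extraction explains a larger share (46%) than the PAF extraction (35%), whose priors discount measure-specific variance.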
Discussion
Overall, the present study supported the construct validity of
the MyPsychLab resource in four ways. To begin, students' MyPsychLab scores were correlated (mildly to moderately) with each of the other five measures of student performance. Second, each of the six measures contributed to a general construct, as demonstrated by an acceptable internal-consistency (reliability) coefficient; in other words, the measures tapped a similar construct. The most convincing evidence, however, was obtained using both principal components and principal axis factoring, which rendered comparable results: a unitary construct emerged within the data, with the highest contribution offered by MyPsychLab and smaller (but still substantial) contributions offered by the other measures of student performance. Future studies