Creative Education
2012. Vol.3, No.3, 293-295
Published Online June 2012 in SciRes (http://www.SciRP.org/journal/ce)
http://dx.doi.org/10.4236/ce.2012.33046
Making the Grade: Evaluating the Construct Validity of
MyPsychLab as a Measure of Psychology Mastery
Kenneth M. Cramer, Craig Ross, Emily S. Orr, Ann Marcoccia
Department of Psychology, University of Windsor, Windsor, Canada
Email: KCramer@UWindsor.CA
Received September 2nd, 2011; revised October 18th, 2011; accepted November 2nd, 2011
Publishers of resources for secondary and post-secondary education are becoming more innovative in de-
veloping tools for mastery of the course material. Pearson Education Inc.'s MyPsychLab is an example of such a tool. MyPsychLab comprises online exercises, flashcards, and demonstrations, and also includes pretest and posttest assessment tools (which can be taken repeatedly until mastery is reached). Given that this tool promotes material mastery, MyPsychLab performance should be related to course performance in more traditional formats. The present study investigated the relation between MyPsychLab and five additional means of course assessment in a large sample of students enrolled in an introductory psychology course. Results indicated that MyPsychLab was significantly correlated with all other measures of course performance. Moreover, performance on MyPsychLab was the highest-loading item on both a latent component and a latent factor assessing overall course performance and psychology mastery.
Keywords: Online Learning Environments; MyPsychLab; Material Mastery
Introduction
One of the most significant challenges facing instructors of
large university classrooms is the provision of effective feed-
back to facilitate student learning. Very often, the only feed-
back students receive in these types of classes comes in the
form of summative assessment—a method of testing knowl-
edge at the end of a course or unit that is primarily intended to
determine a student’s grade (Wininger, 2005). From the per-
spective of a student, however, summative assessment feedback
is “too little, too late” because the feedback appears as a final
grade at the conclusion of the semester.
In order to enhance student learning throughout the course,
some instructors attempt to include formative assessment feed-
back in their courses. Unlike summative feedback, formative
feedback is intended to provide information that students can
use to gauge their level of mastery, which can be used to focus
(and perhaps alter) students’ study strategies (Buchanan, 2000).
The rationale for formative feedback, therefore, is to provide
students with feedback on their knowledge or skills while an
opportunity remains to make changes to their study strategies to
broaden and deepen their understanding and mastery of the
course material (Brown & Knight, 1994). In smaller courses,
this kind of feedback can often be provided by the instructor or
teaching assistant. However, in especially large classes (over
250 students), a different approach is needed.
With the proliferation of the Internet and the evolution of
multimedia and informational tools, there has been greater em-
phasis placed on the use of web-based learning resources to
provide opportunities for formative assessment (Wang, Wang,
Wang, & Huang, 2006). Generally, web-based learning tools
for postsecondary education take the form of an online quiz,
followed by feedback and suggested learning resources. Some
programs, such as PsyCAL, do not provide correct answers
(Buchanan, 1999), although many others do. In addition to
offering students the opportunity to receive feedback about
their progress in a course, this type of formative assessment can
also encourage students to develop greater feelings of responsi-
bility and accomplishment in a course, which may foster greater
motivation and preparation (Deeprose & Armitage, 2004).
As a publisher of both secondary and post-secondary educa-
tion materials, Pearson Education Inc. has produced a variety
of pedagogical tools designed to increase student mastery of the
assigned material in various university and college disciplines.
These tools tap a variety of subjects including art and language,
sciences, and social sciences. Of interest in the present study is
the extent to which these tools tap skills relevant to the subject
matter, or whether they are empirically separate from other
more standard methods of measuring student mastery of course
material (e.g., tests and assignments). We will specifically eva-
luate, in the context of a large introductory psychology course,
the construct validity of MyPsychLab.
One especially useful theoretical framework for understand-
ing the benefits of these types of online systems is Vygotsky’s
Zone of Proximal Development, which describes an individual’s
capabilities when facilitated by a more skilled other (Wertsch &
Tulviste, 1992). Traditionally, the “other” is a person who is more
experienced in a particular domain. However, tools like My-
PsychLab can serve as the more skilled other by providing
timely feedback and allowing students to learn from their mis-
takes. In this way, an individual's cognitive potential can be conceived as greater than what he or she can directly ex-
press on a traditional multiple-choice or short-answer exam.
With contemporary tools such as MyPsychLab (www.mypsychlab.com), students enter an online learning environment that offers the opportunity to review and practice (for each chapter) their personal mastery and understanding of the course material. A calendar highlights weekly assignments for a given module, unit, or chapter, each consisting of three parts: pretest, study
plan, and posttest. To begin, students are directed to a pretest
that initially assesses their current understanding of the material.
The pretest consists of 20 - 25 multiple-choice questions based on the chapter contents and yields a pretest score
based on their responses. Students can review the items, with
correct answers revealed for any mistakes. MyPsychLab then
creates a unique study plan specific to the areas needing improvement, based on that student's pretest profile (e.g., brain anatomy modules such as flashcards, quizzes, and demonstrations may be included if many or all of those pretest items were answered incorrectly). Specific segments of the textbook are also identified so that students can locate and review the necessary materials. After the study plan has been reviewed,
students may proceed to the posttest, consisting of 20 - 25 ques-
tions (different from those in the pretest, but common for all
students). A summary score is derived, and students can review
individual items; in this case, however, the correct answers are not revealed. Students can return to the study plan for further re-
view, and attempt the same posttest to improve their score.
MyPsychLab will record (in an instructor’s gradebook) stu-
dents’ last posttest mark prior to the posted deadline. In this
way, students can receive continual feedback about their learn-
ing performance, and more easily identify areas of weakness.
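To make this cycle concrete, the sketch below models the pretest/study-plan/posttest logic just described. It is an illustration only: the class and method names are hypothetical rather than Pearson's implementation, and it captures just the behaviors noted above (a study plan keyed to missed pretest items, repeatable posttests, and recording of the last posttest mark submitted before the deadline).

```python
from dataclasses import dataclass, field
from datetime import datetime

# Illustrative sketch only; names are hypothetical, not Pearson's API.
@dataclass
class ChapterAssignment:
    deadline: datetime
    study_plan: list[str] = field(default_factory=list)
    recorded_mark: float | None = None  # last posttest mark before deadline

    def take_pretest(self, answers: list[str], key: list[str],
                     topics: list[str]) -> float:
        # Incorrectly answered items determine the personalized study plan.
        self.study_plan = [topics[i] for i, a in enumerate(answers)
                           if a != key[i]]
        return 1 - len(self.study_plan) / len(key)

    def take_posttest(self, score: float, when: datetime) -> None:
        # The posttest may be retaken; only the last attempt submitted
        # before the deadline is written to the instructor's gradebook.
        if when <= self.deadline:
            self.recorded_mark = score
```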
Instructors often include this type of tool as a learning re-
source, assuming it should predict performance on both tests
and assignments. Previous research has supported this notion.
For example, Wininger (2005) observed an increase of 10% on
final exam scores among students who received a form of for-
mative assessment after a midterm exam. Similarly, Buchanan
(2000) observed higher performance on final exams among
students who made use of PsyCAL web-based software. How-
ever, the empirical evidence is not entirely positive regarding
the use of formative feedback and web-based tools. In their
study of formative assessment, Gijbels and Dochy (2006) ob-
served a shift to more surface-level thinking and learning prac-
tices among students who received weekly feedback from an
instructor about their progress in the course of learning new
concepts. Likewise, researchers reported no significant relation
between performance on the final examination and students’
use of MyMathLab (Tzufang, 2009). However, since Tzufang's research utilized a mathematics-based curriculum (as opposed to a
subject with numerous domains within a myriad of theoretical
frameworks), there is reason to believe the software may have
been ineffective because of limited opportunities for feedback
and assistance.
Despite the various challenges observed in the use of forma-
tive assessment, we assumed in the present study that the use of
an online tool such as MyPsychLab can provide students with
the kind of practice, reinforcement, and feedback that would
prove helpful in mastering introductory level material. Using an
especially large sample of students and a variety of means to
assess student mastery and performance, we hypothesized that MyPsychLab would correlate with the other performance measures and contribute to an acceptable reliability coefficient among them.
Method
Participants and Measures
The present sample consisted of 1251 students (851 females, 68%) at the University of Windsor in Southwestern Ontario, Canada, enrolled in an introductory psychology course in the January 2009 semester. The course used six measures to derive student grades: 1) a 120-
item multiple choice test (worth 35% of the course grade) based
on the first 4 chapters covered in the semester (child develop-
ment, adulthood and aging, cognition, and intelligence); 2) a
120-item multiple choice comprehensive final examination
(40%), which included past material covered in the midterm but
emphasized recent coverage of social and applied psychology,
personality, psychopathology, and treatment of mental disor-
ders; 3) MyPsychLab (10%), based on the completion of nine chapter posttests, one of which (the applied psychology chapter)
was given double the weight to yield a score out of 10; 4) a
peer-review assignment (10%), wherein students completed a one- to two-page written assignment graded by fellow students; and 5) a class participation mark earned using electronic voting devices, or clickers (5%). Although these five measures accounted for 100% of the grade, students could additionally 6) earn up to three bonus marks in the course through participation in ongoing psychology research.
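As a minimal sketch of this grading scheme (the weights come from the list above; the function name is ours, and the example component values are the sample means reported in Table 1):

```python
# Component weights (%) from the course description above; they sum to 100.
WEIGHTS = {"midterm": 35, "final_exam": 40, "mypsychlab": 10,
           "peer_review": 10, "clicker": 5}

def course_grade(scores: dict[str, float], bonus_marks: float = 0.0) -> float:
    """Weighted course grade (%) from component scores expressed as
    proportions in [0, 1], plus up to three research bonus marks."""
    weighted = sum(WEIGHTS[name] * scores[name] for name in WEIGHTS)
    return weighted + min(bonus_marks, 3.0)

# Example using the sample means from Table 1 (MyPsychLab 7.75/10,
# peer review 7.05/10, clickers 3.48/5, bonus 1.6 of 3 marks):
print(course_grade({"midterm": .61, "final_exam": .65, "mypsychlab": .775,
                    "peer_review": .705, "clicker": .696}, bonus_marks=1.6))
```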
Results
Table 1 shows the intercorrelations (all significant at p < .05,
df = 1249) among the six measures. Using Pearson product-moment correlations, the correlations between MyPsychLab and the other measures ranged from .299 (course midterm) to .488
(clicker). A reliability analysis, using standardized transforma-
tions of the six measures, showed a Cronbach’s alpha coeffi-
cient of .76, with moderate item-total correlations (see Table 1)
for each measure, suggesting they each contributed to meas-
urement of the construct (though arguably less so for the re-
search bonus marks).
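For readers wishing to verify this figure, the standardized Cronbach's alpha for k measures depends only on the mean inter-item correlation r̄, via α = kr̄ / (1 + (k − 1)r̄). A minimal sketch using the intercorrelations reported in Table 1:

```python
import numpy as np

# Intercorrelations from Table 1 (MPL, Mid, Exam, Peer, Click, Bonus),
# symmetrized with a unit diagonal.
R = np.array([
    [1.000, .299, .347, .423, .488, .354],
    [.299, 1.000, .714, .349, .278, .206],
    [.347, .714, 1.000, .356, .285, .224],
    [.423, .349, .356, 1.000, .358, .263],
    [.488, .278, .285, .358, 1.000, .285],
    [.354, .206, .224, .263, .285, 1.000],
])

k = R.shape[0]
r_bar = R[np.triu_indices(k, 1)].mean()    # mean inter-item correlation
alpha = k * r_bar / (1 + (k - 1) * r_bar)  # standardized Cronbach's alpha
print(round(alpha, 2))                     # 0.76, matching the reported value
```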
Given the large number of participants, the data were analyzed using two data-reduction methods. First, we util-
ized a principal components analysis, wherein the relative con-
tribution of each measure was assumed to be equal (with di-
agonal communalities set to unity). This method extracted one
component (rendering rotation unfeasible), explaining 46% of
the shared variance. Examination of the component matrix (see
Table 1) showed that MyPsychLab was the highest contributor
to the single latent component (.734). Secondly, we utilized a
principal axis factor analysis wherein the relative contribution
of each measure was not assumed to be equal (with diagonal
entries set to prior communality estimates). This method also
extracted one factor, explaining 35% of the shared variance.
Examination of the factor matrix showed that MyPsychLab was
the highest contributor to the single latent factor (.667).
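The difference between the two extraction methods reduces to what sits on the diagonal of the correlation matrix: unity for principal components versus iteratively re-estimated communalities (seeded by squared multiple correlations) for principal axis factoring. The following is a minimal numpy sketch of single-component and single-factor extraction; it is our illustration, not the software used in the original analysis. Applied to the Table 1 matrix R above, the squared loadings summed and divided by the number of measures should recover the reported 46% and 35% of shared variance.

```python
import numpy as np

def first_pc_loadings(R: np.ndarray) -> np.ndarray:
    # Principal components: eigendecompose R with its unit diagonal intact.
    vals, vecs = np.linalg.eigh(R)           # eigenvalues in ascending order
    load = vecs[:, -1] * np.sqrt(vals[-1])   # loadings on the first component
    return load * np.sign(load.sum())        # resolve the arbitrary sign

def first_paf_loadings(R: np.ndarray, n_iter: int = 100) -> np.ndarray:
    # Principal axis factoring: replace the diagonal with communality
    # estimates, seeded by squared multiple correlations, and iterate.
    h2 = 1 - 1 / np.diag(np.linalg.inv(R))
    for _ in range(n_iter):
        Rh = R.copy()
        np.fill_diagonal(Rh, h2)
        vals, vecs = np.linalg.eigh(Rh)
        load = vecs[:, -1] * np.sqrt(vals[-1])
        h2 = load ** 2                        # updated communality estimates
    return load * np.sign(load.sum())

# Shared variance explained: (load ** 2).sum() / R.shape[0]
# (approximately .46 for the component and .35 for the factor).
```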
Discussion
Overall, the present study supported the construct validity of
the MyPsychLab resource in four ways. To begin, students’
MyPsychLab scores were correlated (mildly to moderately)
with each of the other five measures of student performance.
Secondly, each of the six measures contributed to a general
construct as demonstrated by a cohesive reliability coefficient.
In other words, the measures tapped a similar construct. The most convincing evidence, however, was obtained using both principal components analysis and principal axis factoring, which ren-
dered comparable results: a unitary construct emerged within
the data, with the highest contribution offered from MyP-
sychLab, and smaller (but still substantial) contributions of-
fered from other measures of student performance.
Table 1.
Measure intercorrelations, item-total correlations, and component and factor loadings.
                         MPL    Mid   Exam   Peer  Click  Bonus   Mean    SD
MyPsychLab              1.00                                      7.75  2.79
Midterm                 .299   1.00                                .61   .12
Examination             .347   .714   1.00                         .65   .14
Peer Review             .423   .349   .356   1.00                 7.05  2.26
Clicker                 .488   .278   .285   .358   1.00          3.48  1.57
Bonus                   .354   .206   .224   .263   .285   1.00   1.60  1.43
Item-Total Correlations  .56    .54    .57    .51    .49    .37
Component Loadings      .734   .706   .663   .688   .723   .540
Factor Loadings         .667   .626   .571   .593   .652   .434
Note: All correlations are significant at p < .05.
Future studies would do well to conduct comparable investigations in other subjects, from the sciences and languages to composition and mathematics.
REFERENCES
Brown, S., & Knight, P. (1994). Assessing learners in higher education.
London: Kogan Page.
Buchanan, T. (2000). The efficacy of a World-Wide Web mediated for-
mative assessment. Journal of Computer Assisted Learning, 16, 193-
200. doi:10.1046/j.1365-2729.2000.00132.x
Deeprose, C., & Armitage, C. (2004). Reports: Giving formative feed-
back in higher education. Psychology Learning and Teaching, 4, 43-
46. doi:10.2304/plat.2004.4.1.43
Gijbels, D., & Dochy, F. (2006). Students’ assessment preferences and
approaches to learning: Can formative assessment make a difference?
Educational Studies, 32, 399-409. doi:10.1080/03055690600850354
Tzufang, H. (2009). The role of task-specific adapted knowledge of re-
sponse feedback in algebra problem solving online homework in a
college remedial course. Doctoral dissertation, Los Angeles: University of Southern California.
Wang, K. H., Wang, T. H., Wang, W. L., & Huang, S. C. (2006). Learn-
ing styles and formative assessment strategy: Enhancing student achieve-
ment in web-based learning. Journal of Computer Assisted Learning,
22, 207-217. doi:10.1111/j.1365-2729.2006.00166.x
Wertsch, J. V., & Tulviste, P. (1992). L. S. Vygotsky and contemporary
developmental psychology. Developmental Psychology, 28, 548-557.
doi:10.1037/0012-1649.28.4.548
Wininger, S. R. (2005). Using your tests to teach: Formative assessment.
Teaching of Psychology, 32, 164-166.
doi:10.1207/s15328023top3203_7