Open Access Library Journal
Vol. 06, No. 02 (2019), Article ID: 90865, 22 pages
10.4236/oalib.1105242

Item Response Theory Modeling of High School Students’ Behavior in a High-Stakes Exam

Helen Gomes1, Raul Matsushita1, Sergio Da Silva2*

1Department of Statistics, University of Brasilia, Brasilia, Brazil

2Department of Economics, Federal University of Santa Catarina, Florianopolis, Brazil

Copyright © 2019 by author(s) and Open Access Library Inc.

This work is licensed under the Creative Commons Attribution International License (CC BY 4.0).

http://creativecommons.org/licenses/by/4.0/

Received: February 21, 2019; Accepted: February 25, 2019; Published: February 28, 2019

ABSTRACT

We put forward a model based on item response theory that highlights the role of two latent features called “proficiency” and “propensity”. The model is fitted to data on the decisions made in a high-stakes exam taken by 10,822 Brazilian high school students, and aims to recover the roles these latent features play in a decision. We find that both the decision to respond or not and the decision's correctness in a group of items can be described by a two-dimensional logistic model, even if the item-by-item adjustment shows imperfections. Not only proficiency but also the choice to refrain from responding depends on both the characteristics of the items and the latent features of the students. In particular, the least proficient students prefer to leave an item blank rather than respond to it incorrectly. There is a low negative linear correlation between exam score and propensity, while score and proficiency are positively, though nonlinearly, correlated.

Subject Area

Statistics

Keywords:

Psychometrics, Item Response Theory, Student Behavior, High-Stakes Exams

1. Introduction

Admissions to higher education institutions in Brazil are traditionally made through an entrance exam called the “vestibular”. However, since 1995 a three-stage evaluation process has been adopted by some universities as an alternative to the vestibular. Here, we consider one such evaluation, administered by the University of Brasilia and called the “Serial Evaluation Program”, or PAS. In particular, we take recently released public data for the third stage of PAS for the years 2006 to 2008 and focus on the exam given on 7 December 2008 to 10,822 final-year high school participants.

The third-stage exam of PAS involves two sections: 1) a foreign language section; and 2) a general knowledge section covering Portuguese, math, physics, biology, chemistry, the arts, philosophy, geography, history, literature and sociology. Here, we concentrate on the second section and its 100 true-or-false items. The dataset is available at Figshare (https://doi.org/10.6084/m9.figshare.5882377.v1).

This is a high-stakes environment for the applicants [1] [2] [3] because the exam payoff means entering a top university. Under such circumstances, participants are expected to behave strategically [4].

This work considers item response theory [5] to model the participants’ behavior. In psychometrics, item response theory (IRT) constitutes a set of methodologies that allow the estimation of intangible individual characteristics (or latent features), such as intelligence, personality traits, emotional states, proficiency and risk taking [5].

In particular, we postulate that the probability that a participant answers an item correctly depends both on intrinsic characteristics of the item, such as its degree of difficulty, and on the participant’s proficiency in the subject the item refers to. Acting strategically, the participant may also either provide an incorrect response or leave the question blank. Here, leaving the question blank is strategically better because answering incorrectly carries a loss. When facing difficult questions, we also assume the participant decides by weighing both the item's intrinsic difficulty and an intuitive latent feature that we call “propensity”.

Leaving a question blank may also signal a participant’s low proficiency regarding the item, as well as a propensity to avoid the loss accruing from answering incorrectly. Our model aims to recover the roles the latent features of proficiency and propensity play in a decision.

Section 2 introduces a model of proficiency and propensity based on item response theory. Section 3 analyzes the data using the model and presents the results. Section 4 concludes.

2. An IRT Model of Proficiency and Propensity

Consider a group of $n$ participants who take part in an exam made up of $I$ items. Let $R_{ij}$ be a dummy variable that takes the value 1 if individual $j$ answers item $i$ (where $1 \le j \le n$; $1 \le i \le I$), and the value 0 if individual $j$ does not answer item $i$. In addition, let $U_{ij}$ be another dummy such that $U_{ij} = 1$ if item $i$ is correctly answered by individual $j$, and $U_{ij} = 0$ if item $i$ is answered incorrectly or left blank by individual $j$.

Figure 1 displays the possible paths to the result $U_{ij}$. First, individual $j$ decides whether or not to answer item $i$. If individual $j$ decides to answer, his or her answer may end up correct or incorrect. If he or she decides not to answer, then $U_{ij} = 0$. Thus, if $R_{ij} = 0$, then $U_{ij} = 0$ with probability 1.

Table 1 shows the joint probability distribution of the variables $R_{ij}$ and $U_{ij}$. The joint probabilities factor into conditional and marginal terms:

$\pi_{00} = P[R_{ij} = 0, U_{ij} = 0] = P[R_{ij} = 0 \mid U_{ij} = 0] \, P[U_{ij} = 0]$ (1)

$\pi_{01} = P[R_{ij} = 1, U_{ij} = 0] = P[R_{ij} = 1 \mid U_{ij} = 0] \, P[U_{ij} = 0]$ (2)

$\pi_{11} = P[R_{ij} = 1, U_{ij} = 1] = P[R_{ij} = 1 \mid U_{ij} = 1] \, P[U_{ij} = 1] = P[U_{ij} = 1]$. (3)

The probabilities $P$ in (1), (2) and (3) are obviously related to $i$ and $j$, but subscripts have been omitted for notational convenience.

Setting $0^0 \equiv 0$, the joint probability distribution can be written as

$P[R_{ij} = r, U_{ij} = u] = \{P[R_{ij} = r \mid U_{ij} = u]\}^{1-u} \, P[U_{ij} = u]$, (4)

where $r, u \in \{0, 1\}$.
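To make the factorization concrete, here is a minimal R sketch (not the authors’ scripts) that builds the four cells of Table 1 from the two pieces in Equation (4); the probability values are purely illustrative.

```r
# A minimal sketch (not the authors' code) of the factorization in
# Equation (4): the joint distribution of (R_ij, U_ij) is built from
# P[R = 1 | U = 0] and P[U = 1]. Probability values are illustrative.

joint_prob <- function(r, u, p_r1_given_u0, p_u1) {
  if (u == 1) {
    # the exponent 1 - u is 0; with the convention 0^0 = 0, the impossible
    # cell (r = 0, u = 1) vanishes and (r = 1, u = 1) reduces to P[U = 1]
    if (r == 1) p_u1 else 0
  } else {
    p_r_given_u0 <- if (r == 1) p_r1_given_u0 else 1 - p_r1_given_u0
    p_r_given_u0 * (1 - p_u1)  # conditional factor times P[U = 0]
  }
}

# the four cells of Table 1: pi_00, pi_01, pi_11, and a structural zero
cells <- expand.grid(r = 0:1, u = 0:1)
cells$prob <- mapply(joint_prob, cells$r, cells$u,
                     MoreArgs = list(p_r1_given_u0 = 0.6, p_u1 = 0.3))
cells            # the cell (r = 0, u = 1) has probability 0
sum(cells$prob)  # 1
```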

The conditional probabilities in (2) and (3) capture the trade-off individual $j$ faces between responding to item $i$ incorrectly and leaving the item blank. These two possibilities make up the event $U_{ij} = 0$. However, treating missing data as incorrect is the least desirable way to account for missing-not-at-random responses in large-scale surveys [6], because a participant tends to leave blank those items he or she considers difficult; to stay in control, the participant picks the items that match his or her proficiency. Incorrect answers and nonresponses carry the same payoff, but scoring nonresponses as incorrect answers biases proficiency estimates [6] [7].

To remedy this deficiency, we consider the approach initiated by Knott et al.

Figure 1. Possible paths for result $U_{ij}$.

Table 1. Joint distribution between $R_{ij}$ ($j$ responds = 1; does not = 0) and $U_{ij}$ (correct = 1; incorrect = 0): $P[R_{ij}=0, U_{ij}=0] = \pi_{00}$, $P[R_{ij}=1, U_{ij}=0] = \pi_{01}$, $P[R_{ij}=1, U_{ij}=1] = \pi_{11}$, and $P[R_{ij}=0, U_{ij}=1] = 0$.

[8] and Albanese and Knott [9], and followed by many others [10]-[16]. We introduce into our model a bivariate latent feature $(\theta_1, \theta_2)$, where $\theta_1$ is what we called propensity in the previous section and $\theta_2$ is proficiency. Propensity means, precisely, that responses to the items convey information about which participants are more prone to answer incorrectly rather than not answer at all. This modeling strategy allows us to incorporate nonresponses explicitly into the analysis. In particular, we take the terms in Equation (4) to be described by two-parameter logistic equations [8] [9]:

$P[R_{ij} = r \mid U_{ij} = 0] = \dfrac{\exp[r \, a_{1i}(\theta_{1j} - b_{1i})]}{1 + \exp[a_{1i}(\theta_{1j} - b_{1i})]}$ (5)

$P[U_{ij} = u] = \dfrac{\exp[u \, a_{2i}(\theta_{2j} - b_{2i})]}{1 + \exp[a_{2i}(\theta_{2j} - b_{2i})]}$ (6)

where $a_{1i}$ and $a_{2i}$ are discrimination parameters, governing how sharply item $i$ separates participants, and $b_{1i}$ and $b_{2i}$ are difficulty parameters related to item $i$ [17] [18]. Here, subscript 1 refers to propensity, while subscript 2 refers to proficiency. The latent feature $\theta_{2j}$ is the proficiency of participant $j$, and $\theta_{1j}$ is the propensity of participant $j$ to answer incorrectly.
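As an illustration, the two curves can be evaluated with base R’s plogis(); the parameter values below are made up for the example.

```r
# Illustrative item characteristic curves for Equations (5) and (6), using
# base R's plogis(). The parameter values are made up for the example.

a1 <- 2.0; b1 <- -0.5  # propensity: discrimination and difficulty of item i
a2 <- 1.0; b2 <-  0.7  # proficiency: discrimination and difficulty of item i

# Equation (5) with r = 1: P[R = 1 | U = 0] as a function of theta1
p_wrong_rather_than_blank <- function(theta1) plogis(a1 * (theta1 - b1))

# Equation (6) with u = 1: P[U = 1] as a function of theta2
p_correct <- function(theta2) plogis(a2 * (theta2 - b2))

theta <- seq(-4, 4, by = 0.1)
plot(theta, p_correct(theta), type = "l",
     xlab = expression(theta), ylab = "Probability", ylim = c(0, 1))
lines(theta, p_wrong_rather_than_blank(theta), lty = 2)
legend("topleft", legend = c("Eq. (6): correct response",
                             "Eq. (5): respond given not correct"),
       lty = c(1, 2), bty = "n")
```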

Equations (5) and (6) give a precise meaning to the latent features: propensity is defined exactly by Equation (5), while proficiency is defined by Equation (6). Propensity is related to the conditional probability of an incorrect response as opposed to a nonresponse. Thus, propensity refers to erring by choosing an incorrect response rather than erring by leaving an item blank. Note that a risk is involved in this choice, so risk taking is implicitly embedded in propensity.

Given that an item will not be answered correctly, a participant 1) may provide an incorrect response or 2) may leave the item blank. A high propensity means the participant picks the former. Because propensity is defined from a probability conditional on the space of incorrect items, within that space the best decision is not to answer.

Propensity and proficiency latent features are usually considered in models of “nonignorable nonresponses” [6] [7]. Here, we consider a two-dimensional IRT model to deal with such nonignorable nonresponses in tests with dichotomous items. While the propensity dimension provides information about omission behavior, the proficiency dimension relates to a candidate’s ability.

Considering Equations (1)-(3), the latent variables $\theta_{1j}$ and $\theta_{2j}$ enter through the logit functions:

$\eta_{1ij} = \ln \dfrac{\pi_{01}}{\pi_{00}} = a_{1i}(\theta_{1j} - b_{1i})$ (7)

$\eta_{2ij} = \ln \dfrac{\pi_{11}}{\pi_{00} + \pi_{01}} = a_{2i}(\theta_{2j} - b_{2i})$. (8)

Substituting the one-dimensional logistic Equations (5) and (6) into (4) yields the bidimensional model:

$P[R_{ij} = r, U_{ij} = u] = \dfrac{\exp[r(1-u)\,\eta_{1ij} + u\,\eta_{2ij}]}{1 + \exp[\eta_{2ij}] + (1-u)\exp[\eta_{1ij}]\{1 + \exp[\eta_{2ij}]\}}$. (9)

This IRT model of proficiency and propensity is noncompensatory [18], in that the low proficiency of participant $j$ in answering an item correctly, $\theta_{2j}$, cannot be compensated by his or her propensity, $\theta_{1j}$. We estimate the item-related parameters $a_{1i}$, $a_{2i}$, $b_{1i}$, $b_{2i}$ by maximum likelihood, whereas the latent features $\theta_{1j}$, $\theta_{2j}$ are estimated by the expected a posteriori method [17] [18]. All the scripts were built using the R language (https://cran.r-project.org/).
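A minimal R sketch of Equation (9), plus a toy expected a posteriori (EAP) computation, follows. This is not the authors’ script: all parameter values are illustrative, and the EAP step assumes a standard normal prior on the latent feature, an assumption the paper does not state explicitly.

```r
# A minimal sketch of the bidimensional model in Equation (9) and of a toy
# EAP computation. Not the authors' script: parameter values are illustrative
# and the EAP step assumes a N(0, 1) prior on the latent feature.

prob9 <- function(r, u, theta1, theta2, a1, b1, a2, b2) {
  eta1 <- a1 * (theta1 - b1)  # logit for propensity, Equation (7)
  eta2 <- a2 * (theta2 - b2)  # logit for proficiency, Equation (8)
  # support is {(0,0), (1,0), (1,1)}; the cell (r = 0, u = 1) is impossible
  if (r == 0 && u == 1) return(0)
  exp(r * (1 - u) * eta1 + u * eta2) /
    (1 + exp(eta2) + (1 - u) * exp(eta1) * (1 + exp(eta2)))
}

# sanity check: the three possible cells sum to 1
p <- c(prob9(0, 0, 0.2, -0.5, 2.0, -0.3, 1.0, 0.7),
       prob9(1, 0, 0.2, -0.5, 2.0, -0.3, 1.0, 0.7),
       prob9(1, 1, 0.2, -0.5, 2.0, -0.3, 1.0, 0.7))
sum(p)  # 1

# toy EAP estimate of proficiency theta2 from one participant's 0/1 pattern,
# with item parameters held fixed and the N(0, 1) prior discretized on a grid
eap_theta2 <- function(u, a2, b2, grid = seq(-4, 4, by = 0.05)) {
  prior <- dnorm(grid)
  lik <- sapply(grid, function(t) {
    pc <- plogis(a2 * (t - b2))    # Equation (6) for each item
    prod(pc^u * (1 - pc)^(1 - u))  # likelihood of the response pattern
  })
  post <- lik * prior
  sum(grid * post) / sum(post)     # posterior mean = EAP estimate
}

set.seed(1)
a2 <- runif(20, 0.7, 1.5); b2 <- rnorm(20)
u  <- rbinom(20, 1, plogis(a2 * (0.8 - b2)))  # person with true theta2 = 0.8
eap_theta2(u, a2, b2)
```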

In item response theory, items are usually evaluated by how well they adhere to an adjusted model [19]. In particular, for the joint distribution in Table 1, the chi-square statistic of item $i$ is given by

$\chi_i^2 = n \left[ \dfrac{(\hat{\pi}_{11,i} - \tilde{\pi}_{11,i})^2}{\hat{\pi}_{11,i}} + \dfrac{(\hat{\pi}_{01,i} - \tilde{\pi}_{01,i})^2}{\hat{\pi}_{01,i}} + \dfrac{(\hat{\pi}_{00,i} - \tilde{\pi}_{00,i})^2}{\hat{\pi}_{00,i}} \right]$, (10)

where

$\hat{\pi}_{00,i} = 1 - \hat{\pi}_{11,i} - \hat{\pi}_{01,i}$,

$\hat{\pi}_{11,i} = \sum_{j=1}^{n} \hat{P}(U_{ij} = 1)/n$,

$\hat{\pi}_{01,i} = \sum_{j=1}^{n} \hat{P}(R_{ij} = 1 \mid U_{ij} = 0) \, \hat{P}(U_{ij} = 0)/n$

are the aggregates of the probability estimates in model (9), and $\tilde{\pi}_{11,i}$, $\tilde{\pi}_{01,i}$ and $\tilde{\pi}_{00,i} = 1 - \tilde{\pi}_{11,i} - \tilde{\pi}_{01,i}$ are the corresponding empirical frequencies, that is, the ratios between the number of occurrences and the total number of participants. Under the null hypothesis that the model fits the joint distribution in Table 1, the chi-square statistic (10) has 2 degrees of freedom, as it depends on two random variables.
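In R, the statistic in Equation (10) is a one-liner; the cell fractions below are illustrative, not taken from the paper’s tables.

```r
# The item-fit statistic in Equation (10). The cell fractions below are
# hypothetical, not taken from the paper's tables.

chisq_item <- function(n, pi_hat, pi_tilde) {
  # pi_hat: expected fractions (pi_11, pi_01, pi_00) from model (9)
  # pi_tilde: observed fractions in the same order
  n * sum((pi_hat - pi_tilde)^2 / pi_hat)
}

n <- 10822
pi_hat   <- c(0.45, 0.35, 0.20)  # hypothetical model-implied fractions
pi_tilde <- c(0.44, 0.36, 0.20)  # hypothetical observed fractions

x2 <- chisq_item(n, pi_hat, pi_tilde)
x2
pchisq(x2, df = 2, lower.tail = FALSE)  # p-value with 2 degrees of freedom
```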

In particular, to assess the similarity between the expected and observed fractions $\pi_{kk}$ in a set of $I$ items, for either $k = 0$ (nonresponses) or $k = 1$ (correct responses), we consider the Pearson correlation measure

$\rho_k = \dfrac{\sum_{i=1}^{I} [\hat{\pi}_{kk,i} - m(\hat{\pi}_{kk,i})][\tilde{\pi}_{kk,i} - m(\tilde{\pi}_{kk,i})]}{\sqrt{\sum_{i=1}^{I} [\hat{\pi}_{kk,i} - m(\hat{\pi}_{kk,i})]^2} \, \sqrt{\sum_{i=1}^{I} [\tilde{\pi}_{kk,i} - m(\tilde{\pi}_{kk,i})]^2}}$, (11)

where $m(\pi_{kk,i}) = \sum_{i=1}^{I} \pi_{kk,i}/I$.
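Equation (11) is the ordinary Pearson correlation computed across items, as a quick base R check confirms; the item-level fractions here are simulated for illustration.

```r
# Equation (11) written out explicitly and checked against base R's cor();
# the item-level fractions are simulated for illustration only.

rho_k <- function(pi_hat, pi_tilde) {
  dh <- pi_hat   - mean(pi_hat)    # deviations of expected fractions
  dt <- pi_tilde - mean(pi_tilde)  # deviations of observed fractions
  sum(dh * dt) / sqrt(sum(dh^2) * sum(dt^2))
}

set.seed(2)
pi_hat   <- runif(25, 0.1, 0.6)            # 25 hypothetical items
pi_tilde <- pi_hat + rnorm(25, sd = 0.01)  # observed close to expected

rho_k(pi_hat, pi_tilde)
cor(pi_hat, pi_tilde)  # identical: Equation (11) is the Pearson correlation
```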

Next, we show the analysis of data and the results from model (9).

3. Results

Figure 2(a) shows a funnel-shaped dispersion between the total of unanswered items, $T_n$, and the total of correct responses, $T_c$, per participant. As $T_n$ rises, the variability of $T_c$ plummets. Figure 2(b) shows a triangle-shaped dispersion between $T_n$ and the total of incorrect responses, $T_w$. While the distributions of $T_c$ and $T_w$ are roughly sinusoidal, the distribution of $T_n$ reveals a concentration of zeros (12.5 percent of participants responded to all the items). The variability of correct and incorrect responses among the participants who left no items blank is high, suggesting they are likely to present a larger propensity $\theta_1$.

Figure 3 shows the percentage of nonresponses for each item across the four groups of disciplines defined in Table 2. As can be seen in Figure 3, the disciplines in Groups II and III show more nonresponses (p-value = 0.0005; Kruskal-Wallis test, d.f. = 3). Model (9) therefore takes this fact into account.

Regarding the percentage of incorrect responses relative to all non-correct outcomes (incorrect responses plus nonresponses) for the groups, that is,

$P_{wt} = \dfrac{\pi_{01}}{\pi_{00} + \pi_{01}} \times 100$, (12)

Figure 4 reveals no pattern across groups (p-value = 0.16; Kruskal-Wallis test, d.f. = 3). Figure 5 shows the dispersion of the empirical values of the probabilities

Figure 2. (a) Dispersion between the total of unanswered items and the total of correct responses; (b) dispersion between the total of unanswered items and the total of incorrect responses. Their respective marginal distributions are also shown.

Table 2. Groups of disciplines in the exam.

Figure 3. Percentage of nonresponses, by group. Groups II and III show more nonresponses.

Figure 4. The percentage of incorrect responses relative to all incorrect responses for the groups does not have a pattern.

Figure 5. Dispersion of the empirical values of $\tilde{\pi}_{00}$ and $\tilde{\pi}_{11}$.

$\pi_{00}$ and $\pi_{11}$. Nonresponses $\tilde{\pi}_{00}$ are expected to drop as correct responses $\tilde{\pi}_{11}$ increase, as seen, for instance, for items 70, 71, 72, 73, 83 and 84 (highlighted in Figure 5). For the other highlighted items, however, this did not occur, and incorrect responses were more common than nonresponses. For instance, item 35 presented only 5.1 percent nonresponses and 76.3 percent incorrect responses (and thus 18.6 percent correct responses).

Tables 3-6 show the parameter estimates of the items obtained by marginal maximum likelihood, along with the observed percentage frequencies and the expected percentage frequencies from our adjusted model. The $\chi^2$ statistics and their corresponding p-values are also shown.

Figure 6 shows the dispersion between the discriminating power parameter and the difficulty parameter regarding propensity, by group. Most responses to the items allow reasonable discriminating power, that is, $a_1 > 1$, and present a

Table 3. Results for Group I.

Table 4. Results for Group II.

low degree of difficulty ($b_1 < 0$). This result suggests that those with a lower propensity to respond incorrectly prefer not to respond (that is, they give nonresponses).

Figure 7 shows the dispersion between the discriminating power parameter and the difficulty parameter regarding proficiency, by group. Here, responses to the items allow less power to discriminate the most proficient participants, apart from items 20, 77, 78, 82, 83, 95, 96, 118 and 119, for which $a_2 > 1$ and $b_2 > 0$.

To illustrate this, first consider items 83 and 84 of Group III, whose keyed answers are “correct”. The parameter estimates for item 83 were $a_1 = 2.35$, $b_1 = 0.27$, $a_2 = 1.02$ and $b_2 = 0.72$. For item 84, they were $a_1 = 2.40$, $b_1 = 0.61$, $a_2 = 0.71$ and $b_2 = 2.15$. Thus, as for propensity, responses to the items allow

Table 5. Results for Group III.

for good discriminating power and have positive difficulty parameters. This suggests that responses to these items convey information about which participants are more prone to respond incorrectly rather than not respond.

Table 7 compares the expected joint percentage distribution of $R_{ij}$ and $U_{ij}$ from our model (9) with its empirical joint distribution (in parentheses). Taken in isolation, items 83 ($\chi^2 = 19.15$; p-value < 0.001) and 84 ($\chi^2 = 63.90$; p-value < 0.001) adjust poorly to model (9). However, when considered together with the other items of Group III, items 83 and 84 do not deviate significantly from the expected lines in Figure 9 and Figure 10.

Table 6. Results for Group IV.

As another example, consider items 70-75 from Group II, where the keyed answer of item 70 is “incorrect” and those of the remaining items are “correct”. Parameter estimates for these items appear in Table 4 above. As for proficiency, these items have moderate discriminating power ($0.73 \le a_2 \le 0.91$) and positive difficulty ($0 \le b_2 \le 1.85$). Regarding propensity, the items show high discriminating power ($2.14 \le a_1 \le 2.58$) and difficulty parameters in the range $-0.74 \le b_1 \le 0.16$.

Table 8 shows the expected joint distributions from model (9) and the empirical

Figure 6. Dispersion between the discriminating power parameter and the difficulty parameter regarding propensity, by group.

Figure 7. Dispersion between the discriminating power parameter and the difficulty parameter regarding proficiency, by group.

Table 7. Joint percentage distribution of $R_{ij}$ (responded = 1; did not respond = 0) and $U_{ij}$ (correct = 1; incorrect = 0): expected from model (9) and empirical (in parentheses). Items 83 and 84.

Table 8. Joint percentage distribution of $R_{ij}$ (responded = 1; did not respond = 0) and $U_{ij}$ (correct = 1; incorrect = 0): expected from model (9) and empirical (in parentheses). Items 70-75.

ones. Again, apart from item 70, the items did not appear to adhere to the model ($\chi^2 > 13$; p-value < 0.002). However, both the fractions of observed correct responses ($\tilde{\pi}_{11}$) and of observed nonresponses ($\tilde{\pi}_{00}$) fall near their corresponding expected lines given by model (9) (Figure 9 and Figure 10).

Figure 8 summarizes the $\chi^2$ distances between the observed distributions and those expected from model (9) for the items, by group. Horizontal dashed lines mark off the 52 items for which the model is better adjusted (9 items from Group I; 12 from II; 10 from III; and 21 from IV). For all 52 items beneath the lines, $\chi^2 < 9.21$, with p-values greater than 1 percent. However, perhaps apart from the Group I items in Figure 9, there are more than 52 items whose observed frequencies $\tilde{\pi}_{00}$ and $\tilde{\pi}_{11}$ are similar to the expected frequencies

Figure 8. $\chi^2$ distances (with d.f. = 2) between the observed distributions and those expected from model (9) for the items, by group. The items whose distances are statistically null fall below the horizontal dashed lines (critical value of 9.21 at the 1 percent significance level), and thus are well adjusted to model (9).

from the model, $\hat{\pi}_{00}$ and $\hat{\pi}_{11}$, with $\rho > 0.995$ (Figure 9 and Figure 10).

A slightly different picture emerges for the Group I nonresponses (Figure 9), where the fractions of nonresponses, $\tilde{\pi}_{00}$, fall above the expected ones, $\hat{\pi}_{00}$.

Figure 11 shows the joint distribution of $\theta_1$ and $\theta_2$, by group. It suggests the existence of at least two types of participants. The clusters of dots at the top refer to the participants who leave no items blank ($T_n = 0$). Overall, less proficient participants (low $\theta_2$) are less likely to respond incorrectly, preferring to leave an item blank (low $\theta_1$). Past a proficiency threshold, however, this pattern changes, and propensity tends toward the modal region of the distributions.

Figure 9. Dispersion between the observed $\tilde{\pi}_{00}$ and the expected $\hat{\pi}_{00}$ frequencies of nonresponses, by group (corresponding Pearson correlation coefficients $\rho$ in parentheses).

Figure 12 and Figure 13 show the relationship between score, $S$, and the latent variables $\theta_1$ and $\theta_2$, by group. Score and propensity present a low negative linear correlation (Figure 12). Solid lines show conditional mean values $S \mid \theta_1$ adjusted nonparametrically using the LOESS method. Beyond a threshold (say, $\theta_1 > 1$), participants with higher propensities score lower. Before this threshold is reached, however, expected scores lie on a plateau around which the dispersion is funnel shaped. Moreover, as expected, participants who are more proficient tend to lie above the solid line.

Figure 13 shows that scoring and proficiency are positively and nonlinearly correlated. Participants who are more proficient tend to score higher, and at a steeper slope than those who score lower. As expected, participants with higher propensities tend to lie below the solid line. For a given level of proficiency, $\theta_2$, participants with higher propensities tend to score lower. However,

Figure 10. Dispersion between the observed $\tilde{\pi}_{11}$ and the expected $\hat{\pi}_{11}$ frequencies of correct answers, by group (corresponding Pearson correlation coefficients $\rho$ in parentheses).

as $\theta_2$ rises, the dispersion of $S$ lessens, which dampens the effect of $\theta_1$. Conversely, as $\theta_2$ falls, $\theta_1$ impacts $S$ more, and $S \mid \theta_2$ tends to flatten.
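For readers wishing to reproduce this kind of conditional-mean curve, a minimal sketch with base R’s loess() follows; the data are simulated, and the logistic-shaped link between score and proficiency is an assumption made only for illustration.

```r
# A minimal sketch of the LOESS conditional-mean curves in Figures 12 and 13,
# using base R's loess(). The data are simulated; the logistic-shaped link
# between score and proficiency is assumed only for illustration.

set.seed(3)
theta2 <- rnorm(2000)                                 # simulated proficiency
S <- 60 * plogis(1.2 * theta2) + rnorm(2000, sd = 5)  # simulated score

fit <- loess(S ~ theta2)  # local polynomial regression of S on theta2
grid <- seq(-3, 3, by = 0.1)
S_hat <- predict(fit, newdata = data.frame(theta2 = grid))

plot(theta2, S, pch = ".", xlab = expression(theta[2]), ylab = "S")
lines(grid, S_hat, col = "red", lwd = 2)  # conditional mean S | theta2
```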

4. Conclusions

This work considers item response theory to model 10,822 Brazilian high school students’ behavior in a high-stakes exam that may enable them to enter a top university. We put forward a model based on item response theory that highlights the role of latent features that we call “proficiency” and “propensity”.

The key strategic decision of a participant is to either risk an incorrect response or leave the question blank. Leaving the question blank is strategically better, because responding incorrectly is a loss. A participant then decides by taking into account both intrinsic difficulty and the latent feature of propensity.

Leaving a question blank may also reflect the participant’s low proficiency regarding the item as well as the propensity to avoid the loss accruing from responding

Figure 11. Dispersion between the latent features $\theta_1$ and $\theta_2$, by group. Solid red lines show conditional mean values $\theta_1 \mid \theta_2$ adjusted nonparametrically using the LOESS method, and $T_n$ is the total of unanswered items.

incorrectly. Our model aims to recover information regarding the roles the latent features (proficiency and propensity) play in a decision.

In the model we set up, propensity is defined exactly by Equation (5), while proficiency is defined by Equation (6). Propensity means the propensity to respond incorrectly rather than not respond. And (low) proficiency in responding correctly cannot be compensated by the propensity to respond incorrectly.

We estimate by maximum likelihood (using the R language) the parameters

Figure 12. Dispersion between score $S$ and propensity $\theta_1$, by group. For each group, the linear correlations between $S$ and $\theta_1$ are, respectively, −0.20, −0.03, −0.17 and −0.14. Solid lines show conditional mean values $S \mid \theta_1$ adjusted nonparametrically using the LOESS method.

Figure 13. Dispersion between score $S$ and proficiency $\theta_2$, by group. For each group, the correlations between $S$ and $\theta_2$ are, respectively, 0.91, 0.67, 0.60 and 0.76. Solid lines show conditional mean values $S \mid \theta_2$ adjusted nonparametrically using the LOESS method.

governing the items’ discriminating power and difficulty. Proficiency and propensity are estimated by the expected a posteriori method.

Based on the chi-square distances, 52 of the 100 items proved to fit the model well. For each group, the overall adhesion of the data to our adjusted model was evaluated by the Pearson correlation coefficient ($\rho$). Both correct responses and nonresponses showed strong agreement with the adjusted model (Figure 9 and Figure 10), with $\rho > 0.995$.

This suggests that both the decision to respond or not and the decision's correctness in a group of items can be described by a two-dimensional logistic model, even if there are imperfections in the item-by-item adjustment.

Refraining from responding is found to depend on both the characteristics of the items and the latent features of the participants. In particular, the least proficient participants prefer to leave an item blank rather than respond to it incorrectly.

Scoring on the exam and propensity present a low negative linear correlation, whereas scoring and proficiency are positively, though nonlinearly, correlated. Thus, for a given level of proficiency, once a threshold is reached, students with higher propensities score lower.

Acknowledgements

We acknowledge financial support from Cebraspe, CNPq and Capes.

Conflicts of Interest

The authors declare no conflicts of interest regarding the publication of this paper.

Cite this paper

Gomes, H., Matsushita, R. and Da Silva, S. (2019) Item Response Theory Modeling of High School Students’ Behavior in a High-Stakes Exam. Open Access Library Journal, 6: e5242. https://doi.org/10.4236/oalib.1105242

References

1. Klein, S.P. and Hamilton, L. (1999) Large-Scale Testing: Current Practices and New Directions. Rand Education. https://www.rand.org/content/dam/rand/pubs/issue_papers/2006/IP182.pdf

2. Hamilton, L.S., Stecher, B.M. and Klein, S.P. (2002) Making Sense of Test-Based Accountability in Education. Rand Education. https://www.rand.org/content/dam/rand/pubs/monograph_reports/2002/MR1554.pdf

3. Abdelfattah, F.A. (2007) Response Latency Effects on Classical and Item Response Theory Parameters Using Different Scoring Procedures. PhD Thesis, Ohio University, Athens, OH.

4. Lievens, F., Sackett, P.R. and Buyse, T. (2009) The Effects of Response Instructions on Situational Judgment Test Performance and Validity in a High-Stakes Context. Journal of Applied Psychology, 94, 1095-1101. https://doi.org/10.1037/a0014628

5. Baker, F.B. (2001) The Basics of Item Response Theory. ERIC Clearinghouse on Assessment and Evaluation, College Park, MD.

6. Rose, N., von Davier, M. and Xu, X. (2010) Modeling Nonignorable Missing Data with Item Response Theory (IRT). Technical Report, ETS, Princeton.

7. Bertoli-Barsotti, L. and Punzo, A. (2013) Rasch Analysis for Binary Data with Nonignorable Nonresponses. Psicologica, 34, 97-123.

8. Knott, M., Albanese, M. and Galbraith, J. (1990) Scoring Attitudes to Abortion. The Statistician, 40, 217-223. https://doi.org/10.2307/2348494

9. Albanese, M. and Knott, M. (1992) TWOMISS: A Computer Program for Fitting a One- or Two-Factor Logit-Probit Latent Variable Model to Binary Data When Observations May Be Missing. LSE Technical Report, London.

10. Knott, M. and Tzamourani, P. (1997) Fitting a Latent Trait Model for Missing Observations to Racial Prejudice Data. In: Rost, J. and Langeheine, R., Eds., Applications of Latent Trait and Latent Class Models in the Social Sciences, Waxmann, Munster, 244-252.

11. Bartholomew, D.J., de Menezes, L.M. and Tzamourani, P. (1997) Latent Trait Class of Models Applied to Survey Data. In: Rost, J. and Langeheine, R., Eds., Applications of Latent Trait and Latent Class Models in the Social Sciences, Waxmann, Munster, 219-232.

12. O’Muircheartaigh, C. and Moustaki, I. (1996) Item Non-Response in Attitude Scales: A Latent Variable Approach. Proceedings of the American Statistical Association, Section of Survey Research Methods, 938-943.

13. O’Muircheartaigh, C. and Moustaki, I. (1999) Symmetric Pattern Models: A Latent Variable Approach to Item Non-Response in Attitude Scales. Journal of the Royal Statistical Society A, 162, 177-194. https://doi.org/10.1111/1467-985X.00129

14. Moustaki, I. and Knott, M. (2000) Weighting for Item Non-Response in Attitude Scales by Using Latent Variable Models with Covariates. Journal of the Royal Statistical Society A, 163, 445-459. https://doi.org/10.1111/1467-985X.00177

15. Moustaki, I. and O’Muircheartaigh, C. (2000) A One Dimension Latent Trait Model to Infer Attitude from Nonresponse for Nominal Data. Statistica, 60, 259-276.

16. Moustaki, I. and O’Muircheartaigh, C. (2002) Locating “Don’t Know”, “No Answer” and Middle Alternatives on an Attitude Scale: A Latent Variable Approach. In: Marcoulides, G.A. and Moustaki, I., Eds., Latent Variable and Latent Structure Models, Lawrence Erlbaum Associates, London, 15-40.

17. Andrade, D.F. and Tavares, H.R. (2005) Item Response Theory for Longitudinal Data: Population Parameter Estimation. Journal of Multivariate Analysis, 95, 1-22. https://doi.org/10.1016/j.jmva.2004.07.005

18. Reckase, M.D. (2009) Multidimensional Item Response Theory. Springer, New York. https://doi.org/10.1007/978-0-387-89976-3

19. Hambleton, R.K., Swaminathan, H. and Rogers, H.J. (1991) Fundamentals of Item Response Theory. Sage, London.