
Lokou, 1999), whereas there were greater deviations between
the two groups (see Table 1): 33.3% (44.4 − 11.1) for one
(Harris, 1972) compared with 14.2% (42.5 − 28.3) for the other
(op. cit., 1999). Measuring the effect-size consists in translating
the deviation itself, and no longer the probability of obtaining
said deviation on the basis of the size of the sample.
Determining Effect-Sizes
The ideal would consist in identifying an indicator that is in-
dependent of the sample size so that the surveys can be com-
pared with each other and, above all, so that we can assess the
effect-size observed. Indeed, sample size sensitivity is one of
the first reasons that led statisticians to work on the concept of
the effect-size.
The contingency coefficient Phi (written φ) is one of the in-
dicators used to quantify the magnitude of the deviation between
our two proportions. It is easily applied because we only need
to calculate φ = √(χ²/n) (n being the total number of individuals
tested).
Accordingly, if we take the data provided by the two contin-
gency tables above, φ equals √(72.38/3280), or 0.15, in the case
of the Guéguen and Fischer-Lokou experiment (1999).
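This calculation is straightforward to reproduce. The following minimal sketch (in Python; the function name is ours, and the χ² value and sample size are the figures quoted above) simply applies φ = √(χ²/n):

```python
from math import sqrt

def phi_coefficient(chi_square: float, n: int) -> float:
    """Contingency coefficient phi = sqrt(chi-square / n)."""
    return sqrt(chi_square / n)

# Figures quoted above for the Guéguen and Fischer-Lokou (1999) data
print(round(phi_coefficient(72.38, 3280), 2))  # 0.15
```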
Interpretation
What is the next step once this coefficient φ has been calcu-
lated and the effect-size assessed? Cohen (1988) put forward
benchmark values of the coefficient used to qualify the effect-
size. Three categories were identified: the low effect (0.1), the
medium effect (0.3) and the high effect (0.5).
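Applied in code, these benchmarks amount to a simple threshold rule. The sketch below is ours (treating Cohen's reference points as category boundaries is our simplification) and qualifies a φ coefficient accordingly:

```python
def qualify_effect(phi: float) -> str:
    """Qualify a phi coefficient against Cohen's (1988) benchmarks."""
    if phi < 0.1:
        return "negligible"
    if phi < 0.3:
        return "low"
    if phi < 0.5:
        return "medium"
    return "high"

print(qualify_effect(0.15))  # "low", i.e. the low/medium region found in survey [2]
```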
The effect is qualified as low to medium in survey [2]. We
may therefore question the scope of an analysis conducted from
the mere statistical significance viewpoint, or based on the
value of χ² alone (as opposed to the magnitude of the effect
obtained).
The Various Effect-Size Indicators
In order to calculate the various indicators used for measuring
effect-size according to the types of variable or the analyses
carried out, we have separated the headings so that the appro-
priate procedure can be identified quickly. We have used the
standard indicators for quantifying the effect-size (see Cohen,
1988 for a more in-depth review). As far as we are aware, there
are many others, covering a wide range of situations (e.g. com-
parison of a mean with a standard; comparison of two means
from independent samples; comparison of two means from
paired samples; comparison of frequencies in a contingency
table, etc.) (see Rosenthal & Rosnow, 1991 for a more in-depth
review). For some indicators, the effect-size is easily calculated
because the indicator has already been produced by earlier
analyses (e.g. the linear correlation coefficient). Furthermore,
when the analysis is performed, most statistical software pack-
ages offer options allowing the user to access these indicators.
We should also note that there are online statistical resources
available on the Internet for calculating these various “effect-
sizes”, such as:
http://www.uccs.edu/~faculty/lbecker/
http://cognitiveflexibility.org/effectsize/
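As an illustration of one of the cases listed above, the comparison of two means from independent samples, here is a minimal sketch of the classic standardized mean difference (Cohen's d) with a pooled standard deviation; the data and the function name are ours:

```python
from math import sqrt
from statistics import mean, stdev

def cohens_d(group1: list[float], group2: list[float]) -> float:
    """Cohen's d for two independent samples (pooled standard deviation)."""
    n1, n2 = len(group1), len(group2)
    s1, s2 = stdev(group1), stdev(group2)
    pooled_sd = sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    return (mean(group1) - mean(group2)) / pooled_sd

# Hypothetical scores for two independent groups
print(round(cohens_d([4, 5, 6, 7, 8], [3, 4, 5, 5, 6]), 2))  # 1.02
```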
Conclusions
Measuring effect-size is an approach that the social psycho-
logy researcher must now include, whenever possible, in his
data analysis. This becomes all the more important when large
samples are used, because large samples allow an effect to
reach statistical significance even when it is limited in size.
Psychology journals abroad, and especially the Anglo-Saxon
journals, increasingly tend to demand that these indicators be
presented on the same footing as the various inferential tests
used. Some research bases meta-analysis computations on an
adjusted variance and/or on a pooled variance of the effect sizes.
Berk and Freedman (2003) are skeptical regarding the effec-
tiveness of meta-analysis. The authors question the assumed
independence of the studies and the assumption that the studies
included constitute a random sample; this is a very important
problem for scientific research. Further, the authors are skepti-
cal about the social (and financial) dependence between the
pool of peer-reviewed journals and the scientific community
that subsequently carries out the meta-analysis. In the authors’
words: “In the present state of our science, invoking a formal
relationship between random samples and populations is more
likely to obscure than to clarify.”
In this article, we have attempted to present the principle of
this quantification and the way in which the customary indica-
tors are calculated. Obviously, there are at present a great many
indicators that refer to specific analysis cases and that take
various usage criteria into account. However, determining these
indicators can help the social psychology researcher to break
free from the classic inferential model of statistical analysis
and to opt for a more equitable assessment of the effects in his
data. Many researchers denounce the imperialism of the infer-
ential method and of the .05 threshold, and recommend that the
reporting of these indicators be made mandatory (Thompson,
in press). Were we to use these indicators, we would be led to
view some of our theoretical analyses and interpretations in a
different light.
References
Berk, R. A., & Freedman, D. (2003). Statistical assumptions as empirical commitments. In T. G. Blomberg & S. Cohen (Eds.), Law, punishment, and social control: Essays in honor of Sheldon Messinger (2nd ed., pp. 235-254). New York, NY: Aldine.
Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Hillsdale, NJ: Erlbaum.
Guéguen, N., & Fischer-Lokou, J. (1999). Sequential request strategy: Effect of donor generosity. The Journal of Social Psychology, 139, 669-671.
Harris, M. B. (1972). The effects of performing one altruistic act on the likelihood of performing another. The Journal of Social Psychology, 88, 65-73. doi:10.1080/00224545.1972.9922544
Lipsey, M. W., & Wilson, D. B. (2000). Practical meta-analysis. Thousand Oaks, CA: Sage.
Rosenthal, R., & Rosnow, R. L. (1991). Essentials of behavioral research: Methods and data analysis (2nd ed.). New York, NY: McGraw-Hill.
Thompson, B. (in press). Research synthesis: Effect sizes. In J. Green, G. Camilli, & P. B. Elmore (Eds.), Complementary methods for research in education. Washington, DC: American Educational Research Association.