Psychology
Vol. 9, No. 6 (2018), Article ID: 85453, 7 pages
https://doi.org/10.4236/psych.2018.96078

Fame in Psychology: A Pilot Study

Adrian Furnham

Norwegian Business School, Oslo, Norway

Copyright © 2018 by the author and Scientific Research Publishing Inc.

This work is licensed under the Creative Commons Attribution International License (CC BY 4.0).

http://creativecommons.org/licenses/by/4.0/

Received: April 28, 2018; Accepted: June 19, 2018; Published: June 22, 2018

ABSTRACT

An opportunistic sample was asked to nominate psychologists under nine different categories. Participants, all qualified psychologists, reported finding the task both challenging and engaging. There was little agreement among participants, with a number of nominated psychologists appearing on different, sometimes contradictory, lists. Ideas for a more serious and systematic study in the area are suggested.

Keywords:

Fame, Importance, Contribution, History of Psychology

1. Introduction

Ask colleagues the following question: “By whatever criteria you choose, who would you nominate as the most important psychologist of all time?” You could also ask them to provide a short-list. The question often takes people by surprise, and they take a long time to answer. It usually yields the “usual suspects”, with the very occasional surprise. Reactions seem to differ somewhat by the age of the respondent, the country they come from and their particular research speciality.

Then try to get them to nominate the most important “living psychologist”, “European psychologist”, “female psychologist”, “cognitive psychologist”, etc. Many are flummoxed, often because they cannot nominate any candidate in a category at all, rather than because they have to choose between candidates. A few are quite clearly embarrassed.

Having done this personal “research” a few times, I have noticed the following. First, people often do not know who is alive or dead. Second, few (including women) can recall any female psychologists (even the famous psychoanalysts), and Americans have no idea that there are any European psychologists. Third, academic psychologists are blinkered by their own specialisms. Fourth, the same psychologists are nominated as heroes and as charlatans. Fifth, most are pretty incoherent about the criteria they used if you probe that issue. It is a deeply sobering and rather disheartening experience.

Of course, this is a recall rather than a recognition task, but it may be the better for it. If you ask people to rate a list including such people as Bandura, Ekman, Kahneman, Seligman, Sternberg, etc., they happily complete the task, though I suspect a good deal of “over-claiming” with respect to knowing their work.

I have collected some data on a rather ad hoc basis. There are powerful effects of age/cohort: older people recall quite different, even obscure, people. Next, there is disciplinary affinity or interest: you discover how narrow the discipline is becoming. Third, there is nationality, with evidence of deep ethnocentrism. Finally, authors of popular books always outshine authors of academic papers.

1.1. Fame and Forgetting

It is very easy to be forgotten: see the paper by Thorndike (1955) in which APA fellows were asked to nominate the psychologists who had made the greatest contribution to the discipline. At least half are names few could now recall. But why? Some highly imaginative and productive scholars get forgotten. We each, no doubt, have our favourites: those we believe unfairly forgotten, as well as those whom we believe quite undeserving of their popular acclaim.

Currently, the more mature debate has moved on to recognition and legacy (Simonton, 2016; Sternberg, 2016), though many papers are more about measuring and evaluating academic contribution than about fame. This is the fame vs merit debate, and it raises the simple question of their relationship: are we more obsessed with measuring “scientific” merit than with rewarding work that in some sense leads to the wider betterment of people?

1.2. How to Measure Contribution

The topic of how to assess, evaluate or measure academic achievement is long established and hotly debated. I contributed to this debate when citations began to be relatively easy to calculate (Furnham, 1990; Furnham & Bonnett, 1992). In the 1980s, Endler and Rushton caused consternation with their analyses of Anglo-Saxon psychologists (Endler, 1987; Endler, Rushton, & Roediger, 1978; Rushton, 1989; Rushton, Murray, & Paunonen, 1983).

Sternberg (2016) listed twelve possible indices to consider in evaluating others. It is a comprehensive list and I can think of few others. One addition is what the British government regulators call “impact”, meaning consequences for the service of society; we all have to supply “data” that fulfil that criterion, but it is very difficult to measure. Sternberg’s list was: departmental judgements; letters from distinguished referees; quantity of publications; quantity of publications controlling for impact factors; number of citations; the h-index; the i10-index; grants and contracts; editorships; service on major grant proposals; awards; and honorary doctorates.
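To make the arithmetic behind two of these indices concrete, the following minimal Python sketch computes the h-index and the i10-index from an author’s per-paper citation counts; the counts shown are hypothetical, purely for illustration.

from typing import List

def h_index(citations: List[int]) -> int:
    # Largest h such that at least h papers have h or more citations each.
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(counts, start=1):
        if c >= rank:
            h = rank
        else:
            break
    return h

def i10_index(citations: List[int]) -> int:
    # Number of papers with at least 10 citations.
    return sum(1 for c in citations if c >= 10)

# Hypothetical citation counts for one author's ten papers.
example_counts = [312, 150, 98, 40, 33, 12, 9, 4, 1, 0]
print(h_index(example_counts))    # 7: seven papers have at least 7 citations each
print(i10_index(example_counts))  # 6: six papers have at least 10 citations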

There are also “alt-metrics”, such as coverage of work in the press and new media, the number of paper downloads and hits, and appearances in quality programmes. Increasingly, it is possible to see on a daily basis who has viewed and downloaded a paper. This is, of course, not the same as reading, quoting or evaluating the work, but it is probably significantly positively correlated with it. I imagine that, just as alternative medicine that works becomes orthodox medicine, alt-metrics will soon find their way into the “scientometric pantheon”.

The data divide, in my view, into three categories: observation data, being the ratings of others (awards, references), which are notoriously unreliable and open to manipulation; output data (the quality and quantity of papers); and income. Whilst the latter two are easier to measure, for many it is the quality of papers that is most important, and this can be measured by the impact factor of the journal and/or the citations to the paper some years later.

1.3. This Study

I thought it was time I collected some data more systematically, so I devised a simple, short “free response” test online. It was as much for personal curiosity as for serious research, but the results are of sufficient interest to suggest repeating the study more seriously.

2. Method

2.1. Participants

In all there were 101 participants, of whom 60 were male. They ranged in age from 22 to 74 years, with a mean of 30.27 years and an SD of 15.08; 93 had a postgraduate degree in psychology. Twelve identified themselves as Applied, 5 as Biological, 9 as Cognitive, 10 as Social, 22 as Clinical, and 43 as “other” psychologists. Participants also gave their nationality, the majority coming from America (23), Britain (22) and Germany (15).

2.2. Questionnaire

This was designed and administered online using the Qualtrics platform. It read:

“Determined entirely by your own criteria, who in your view is…

Please note… You can put more than one name in each category if you wish”

1) The greatest psychologist of all time?

2) The greatest living psychologist?

3) The greatest female psychologist?

4) The greatest European psychologist?

5) The most over-rated psychologist?

6) The greatest personality psychologist?

7) The greatest experimental psychologist?

8) The greatest American psychologist?

9) The most neglected psychologist?

Participants were free to write in any response, or none, to each question. They also gave their sex, age, nationality and specialism.

2.3. Procedure

I sent the questionnaire to approximately 20 colleagues, asking them to complete it and send it on to friends or societies they thought might be interested in taking part. After a month I collected and analysed the data. Thus, from 20 requests I got around a hundred responses. Many sent me short messages such as the following: “Hah! No more than 2 minutes! It’s a much more difficult task than you might think! Maybe because I’ve conducted research on this topic. A major difficulty, for example, is what counts as a psychologist. Are we to include Noam Chomsky, Ivan Pavlov, or Sigmund Freud? Or, for that matter, Gustav Fechner? Anyway, I’ll get back to this when I have 2 hours at my disposal.”

3. Results

3.1. Response Rate

I cannot know how many people received the questionnaire and chose whether or not to complete it. However, I did get about half a dozen responses which fell into two categories. Those who were positive said that, although the task was difficult, they completed it and were very eager to see the results. Those who were negative said that this was an impossible and divisive task and that they would have nothing to do with it. Two wrote me quite a long piece on why the task was difficult and on the extent to which they believed any results I obtained would be of dubious validity.

3.2. Main Nominations

Table 1 shows the results for each question. To prevent the table from being far too long, a name was included only if it received at least two nominations; there were often as many as 20 people in each category who received a nomination from only one person. Three observations can be made about the data. First, there was a considerable amount of missing data: around 75% of respondents seemed unable to answer all the questions. Second, there was considerable variation, with around 25% of the people nominated for each question being unique to a single respondent. Third, the same people appeared on more than one list, even when the categories seemed contradictory (e.g. greatest and most over-rated).
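The tabulation rule above (a name is listed only if it receives at least two nominations) amounts to a simple frequency count over free-text responses. A minimal Python sketch of that step is given below; the responses shown are hypothetical examples, not the actual survey data.

from collections import Counter

def tally(nominations, minimum=2):
    # Count free-text nominations and keep only names mentioned at least `minimum` times.
    counts = Counter(name.strip() for name in nominations if name and name.strip())
    return {name: n for name, n in counts.items() if n >= minimum}

# Hypothetical answers to one question, one entry per respondent.
greatest_of_all_time = ["Freud", "Skinner", "Freud", "James", "Meehl", "Skinner"]
print(tally(greatest_of_all_time))  # {'Freud': 2, 'Skinner': 2}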

4. Discussion

The first observation from the table is that it essentially contains the “usual suspects”. Indeed, many of the same people appeared on several lists, often ones that contradicted one another: thus, Freud was nominated both the greatest and the most over-rated psychologist. Next, some nominations seemed simply incorrect: William James was not a European psychologist, and it would probably be wrong to describe Zimbardo as an experimental psychologist.

To give some idea of the variability of the nominations, the following people were each nominated (presumably in all seriousness) as the greatest psychologist of all time by just one person: Bruner, Frankl, Hebb, Meehl, Rogers and Tajfel, while the most over-rated were Aronson, Buss, Gilbert, Goleman and Kahneman. I think there were fewer than a dozen people, across all the lists combined, whom I did not personally recognise.

Table 1. Results for each of the nine questions.

This was very much a pilot study: one that intrigued and annoyed people in equal measure. They found it surprisingly difficult despite the fact that many were academic psychologists.

Given the problems of recall vs recognition, it may be preferable to give respondents a long list of famous psychologists in alphabetical order. A good starting point could be half (i.e. 100) of the eminent psychologists identified by Diener et al. (2014). Participants could then answer all of the above questions (and indeed more) by rank-ordering three names for each question. The list would, of course, exclude candidates that respondents might otherwise have chosen, most often those coming from non-English-speaking countries.

This study had limitations, particularly its very limited sample. It should be possible to tap into whole national societies, like the American Psychological Association or the British Psychological Society, to obtain a large, representative national sample. It might also be a good idea to extend the list of questions, for example: “The psychologist who has had the most impact on people’s lives?” or “The greatest Asian psychologist?”

5. Conclusion

In conclusion, this was a pilot study on how certified and chartered psychologists think about their peers. It required them to reflect on questions that they appeared not to have considered before. The results were not particularly surprising, though some respondents said they were more surprised by the omissions, and that the lists seemed very conventional.

Cite this paper

Furnham, A. (2018). Fame in Psychology: A Pilot Study. Psychology, 9, 1284-1290. https://doi.org/10.4236/psych.2018.96078

References

1. Diener, E., Oishi, S., & Park, J. (2014). An Incomplete List of Eminent Psychologists of the Modern Era. Archives of Scientific Psychology, 2, 20-32. https://doi.org/10.1037/arc0000006

2. Endler, N. (1987). The Scholarly Impact of Psychologists. In D. Jackson & J. Rushton (Eds.), Scientific Excellence: Origins and Assessment. Newbury Park, CA: Sage.

3. Endler, N., Rushton, J., & Roediger, H. (1978). Productivity and Scholarly Impact (Citations) of British, Canadian and US Departments of Psychology (1975). American Psychologist, 33, 1064-1083. https://doi.org/10.1037/0003-066X.33.12.1064

4. Furnham, A. (1990). Quantifying Quality: An Argument in Favour of Citation Counts. Journal of Further and Higher Education, 14, 105-110. https://doi.org/10.1080/0309877900140208

5. Furnham, A., & Bonnett, C. (1992). British Research Productivity in Psychology 1980-1989. Personality and Individual Differences, 13, 1333-1341. https://doi.org/10.1016/0191-8869(92)90176-P

6. Rushton, J. (1989). A Ten-Year Scientometric Revisit of British Psychology Departments. The Psychologist, 2, 64-68.

7. Rushton, J., Murray, H., & Paunonen, S. (1983). Personality, Research Creativity and Teaching Effectiveness in University Professors. Scientometrics, 5, 93-116. https://doi.org/10.1007/BF02072856

8. Simonton, D. K. (2016). Giving Credit Where Credit’s Due. Perspectives on Psychological Science, 11, 888-892. https://doi.org/10.1177/1745691616660155

9. Sternberg, R. (2016). “Am I Famous Yet?” Judging Scholarly Merit in Psychological Science. Perspectives on Psychological Science, 11, 877-881. https://doi.org/10.1177/1745691616661777

10. Thorndike, R. (1955). The Psychological Value Systems of Psychologists. American Psychologist, 10, 787-789.