I have discussed before the limitations of the discipline as a quantitative science, an issue that has been debated since the mid-1900s. When one critically reviews the discipline of psychology, quantitative science rarely comes to mind.
The great theorists such as Freud, Jung, Allport, and Maslow did not rely heavily on numbers to support their work. The idea that people can be reduced to common constructs is anathema to work such as George Kelly’s personal construct theory. However, Kelly’s work is hard to commercialize because it is based on working with the individual and their personal constructs of the world.
Kelly and other researchers understood measurement to be about well-founded inference rather than quantification alone. In this sense, their definition and application of measurement were far less rigid, but perhaps no less valid, than one grounded purely in numbers.
I believe the literature is clear that many commonly accepted terms in I/O psychology, like competencies, do not stand up from a measurement perspective, yet they remain in use by practitioners on PRACTICAL GROUNDS. We therefore have to decide how to address this gap between what is a useful concept and how to draw valid and justifiable inferences from it.
I have referred to this in another forum and I quote from that here:
“Until there are clear guidelines how MM (multi-method) data can be combined to make judgements (or if not combined as the case should be, how MM can be brought together using some heuristic or the like), then the dimension school will remain despite the literature to the contrary. This, I think, is the biggest gap at the moment and until this is addressed with very clear guidelines (i.e. ‘a how to’) dimensions will remain the preferred methodology by practitioners (but not OPRA as you know)”.
I believe the solution resides in evaluation methodology. Evaluation is the process of determining the worth, merit, and significance of what is being evaluated, whether that is an individual for a job or a particular training intervention. I have been fortunate enough to be trained in this field under Professor Michael Scriven and Dr. Jane Davidson through the Claremont Graduate School in Los Angeles.
For those who are not familiar with evaluation as a discipline, I strongly suggest an online review of http://www.wmich.edu/evalctr/. This is a good starting point for those wishing to understand the value of evaluation for our discipline. Our reliance on quantitative methods is a MAJOR gap in the I/O literature at present, and one that the evaluation field has an answer to. Our discipline is waking up to the reality of what is quantitative and what is not, and to how strict quantification of areas such as MM selection can lead to clearly wrong conclusions.
The questions and problems have been identified. What we do not yet have is a clear answer on the way forward. Perhaps the solution involves expanding our minds beyond a strict reliance on quantified science and toward qualification through evaluation.