Re: ruminations on the big 5 etc.

anima@devi.demon.co.uk
Sun, 8 Oct 1995 22:04:19 +0000

Jim Mancuso quotes me

>>a) The big 5 in multivariate personality analyses because of the way in
>>which we chunk information, 7 +/- 2? (Weeell, I'd like to see an account
>>of the mechanisms that make the former a function of the latter!)

and gives a careful account of how the 7 +/- 2 rule seems to apply to "the
standard number of superordinate" constructs in a construct system,
visiting Osgood's big 3 en route but not referring particularly to the
psychometric big 5.

To be honest, I couldn't see some of the connections his account was trying
to establish, maybe because I haven't read the Rosenberg reference to which
he referred.

However, he didn't really address the main point of my quote, which is as
follows. Assuming the 7 +/- 2 reference is to Miller's "magical number"
paper (eek, I hope it is, otherwise I _am_ confused!), I fail to see what
explanatory connection there can possibly be between an early paper on
some interesting limitations of, and encoding processes in, human
information processing and the tendency of psychometricians to interpret
their factor analyses in such a way that 5 second-order factors reliably
emerge.

No, I'm _not_ saying that factor extraction is an arbitrary process; _nor_
am I defending the view that the 5 factors psychometricians identify have
some "real" existence just because they emerge fairly reliably.

My point is completely different, and a bit more subtle, I feel.

Information-processing models are based ultimately on an analogy with
Boltzmann's negentropy, and take much of their power from the idea of
information as uncertainty reduction, as first defined statistically for
communication systems by Claude Shannon.
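To pin down what that statistical sense of "information" amounts to: it is
a property of a probability distribution over alternatives, and nothing
more. A minimal sketch in Python, with an arbitrary illustrative
distribution (my numbers, not anyone's data):

    import math

    def shannon_entropy(probs):
        # Shannon entropy H = -sum(p * log2(p)), measured in bits.
        # A property of the distribution itself: it says nothing about
        # what any particular message *means* to its receiver.
        return -sum(p * math.log2(p) for p in probs if p > 0)

    # Four equally likely alternatives: learning which one occurred
    # resolves exactly 2 bits, whatever the alternatives signify.
    print(shannon_entropy([0.25, 0.25, 0.25, 0.25]))  # 2.0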

It is a _gross_ error, confusion, and mistake to identify Shannon's use of
"information as uncertainty reduction" with the meaning which a human
observer gains when s/he receives a message or takes a decision which
chooses amongst some set of alternatives!

Anyone who may be tempted to do so has only to glance at the early pages of
Shannon and Weaver, the standard psychologist's introduction to Shannon's
theory, where Shannon expresses doubts about similarities between
statistical uncertainty reduction and what human beings experience when
they move from lesser to greater psychological certainty, which is what I
call "meaning creation". (True, he tentatively suggested a
statistical-information-like measure which he called "surprisal" as being
more related to meaning creation, but that seems to have proved a dead
end.) Except for some proto-cognitive-psychology lab experiments of the
1960s, occasioned partly by the stimulus Miller's paper gave to
information-processing models of human cognition, the information-meaning
identity was dropped rather early on: see, e.g., the early work in
subjective probability and decision-making by people like Edwards and
Phillips, who turned to Bayesian statistics as a more useful way of
attempting to measure changes in psychological meaning than Shannon-type
information theory.
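To spell out the contrast just mentioned: the surprisal of an event x is
-log2 p(x), in bits, whereas Bayesian updating revises a degree of belief
via p(H|D) = p(D|H)p(H)/p(D). A toy sketch, with made-up numbers purely
for illustration:

    import math

    def surprisal(p):
        # Surprisal in bits: how statistically unexpected an event is.
        return -math.log2(p)

    def bayes_update(prior, likelihood, marginal):
        # Posterior belief in a hypothesis H after seeing data D:
        # p(H|D) = p(D|H) * p(H) / p(D)
        return likelihood * prior / marginal

    # A rare event carries high surprisal...
    print(surprisal(0.05))               # ~4.32 bits

    # ...whereas Edwards/Phillips-style updating tracks a change in
    # belief: prior 0.5, likelihood 0.9, p(D) = 0.9*0.5 + 0.2*0.5 = 0.55
    print(bayes_update(0.5, 0.9, 0.55))  # ~0.818

The first number is a fact about a probability; the second is a revised
degree of belief, which is at least closer to what a psychologist means by
a change in certainty.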

In short, you can measure information in bits and/or chunks if you like,
and talk about the properties of the brain as a physical system by means of
models based on statistical information theory as Miller did: but you can't
measure meaningfulness according to this metric!
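That information really can be measured in bits or chunks is easy to show
with the recoding example from Miller's own paper: rewrite 21 binary
digits as 7 octal digits and the memory span of roughly 7 items stays put
while the bits per item triple. A small illustration of that recoding (the
digit string itself is arbitrary):

    # The same 21 bits, held either as 21 binary digits (far beyond
    # span) or as 7 octal chunks of 3 bits each (within it).
    binary = "101110001010011110010"      # 21 binary digits = 21 bits
    octal = format(int(binary, 2), "o")   # recode into octal chunks
    print(octal, len(octal))              # 5612362 7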

Maxwell's Demon is an actor in a physicist's thought-experiment, not a
human agent, dammit!

And it seems to me that it's a wee bit of a confusion of realms of
discourse to regard Miller's paper as being about meaning: and it's meaning
which I understand the psychometrician is trying to create when s/he looks
at a factor solution and explores the way in which variance is partitioned
in, as Jim points out, large samples of respondents.

kind regards,

Devi Jankowicz.

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%