Bill Chambers on factor analysis

CSU (csu@brain.wph.uq.oz.au)
14 Mar 1996 08:19:31 -0500

Bill,

I have had trouble keeping up with all your posts on regression
and the responses. To do them justice I will file them away for
a rainy day. In relation to the post you sent to me I will offer
a few points.

Regarding Brunswik's work and developments from it, a reference
is: Brehmer B and Joyce C (1988). Human Judgment: The SJT View.
North-Holland, Amsterdam. This approach does not utilise
constructs as PCP does, though some of the work I have read
indicates an attempt to sample relevant constructs/variables by
'interviewing' key participants in an area.

I think I understand your 'mob rule' example, though in the case
of a regression it would seem very inappropriate to sum the ribbon
and introversion variables and then try to account for the
variance in the resulting combined personality score. I would
have thought a more typical example would be to include ribbon
and introversion as independent variables and try to account for
variance in a dependent variable, such as scores on some other
measure, e.g., age or educational achievement.
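For what it's worth, the contrast between the two set-ups can be sketched in a few lines. All names, data and coefficients here are invented purely for illustration:

```python
import numpy as np

# Invented illustration: predict a criterion (e.g. age) from two
# personality-style predictors, rather than from their sum.
rng = np.random.default_rng(2)
n = 40
ribbon = rng.normal(size=n)
introversion = rng.normal(size=n)
age = 30 + 2.0 * ribbon - 1.0 * introversion + rng.normal(size=n)

# Design matrix with an intercept column; ordinary least squares.
X = np.column_stack([np.ones(n), ribbon, introversion])
coef, *_ = np.linalg.lstsq(X, age, rcond=None)

# Contrast: regressing on the summed score forces the two
# predictors to share a single coefficient, losing their
# separate (here opposite-signed) contributions.
X_sum = np.column_stack([np.ones(n), ribbon + introversion])
coef_sum, *_ = np.linalg.lstsq(X_sum, age, rcond=None)
```

With separate predictors the fit recovers the two distinct effects; the summed version cannot.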

I appreciate that your factor analysis example may be an extreme
one to illustrate a point. I say this for two reasons: 1) if
personal constructs are adequately sampled, there should be a
greater diversity of constructs than in the example given; and
2) the results of any analysis should be interpreted with the raw
data in mind. Manual review of the raw data and the respective
element/construct correlation matrices should, to an extent,
alert a person to the concerns you raised.

One response to the issues you raise is to perform different
analyses and examine the results. Presently I am comparing the
PCA output from Ingrid and G-Pack with cluster analyses for a
number of grids, to examine differences and/or similarities
between the results obtained from these procedures. I feel this
is an important aspect of reliability.
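Since I cannot reproduce Ingrid or G-Pack here, the following is only a rough Python sketch of the kind of comparison described: a PCA of a grid (via SVD) set against a hierarchical cluster analysis of the same elements. The grid data are simulated, and real grid-analysis programs differ in their details:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical 8 element x 10 construct grid of ratings on a 1-7 scale.
rng = np.random.default_rng(1)
grid = rng.integers(1, 8, size=(8, 10)).astype(float)

# PCA via SVD on the construct-centred grid.
centred = grid - grid.mean(axis=0)
U, s, Vt = np.linalg.svd(centred, full_matrices=False)
explained = s**2 / (s**2).sum()   # proportion of variance per component
element_loadings = U * s          # elements located in component space

# Hierarchical cluster analysis of the same elements.
Z = linkage(grid, method='average', metric='euclidean')
clusters = fcluster(Z, t=3, criterion='maxclust')

# Rough comparison: do elements that sit close together on the first
# principal component also fall into the same cluster?
order_pca = np.argsort(element_loadings[:, 0])
```

Agreement between the element ordering on the first component and the cluster memberships is one informal way of checking whether the two procedures tell a similar story about a grid.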

In relation to your response regarding averaging construct
correlations and entering these into a regression, I believe the
correlations were Pearson correlation coefficients. For example,
the correlation between the release and likelihood-of-harm
construct ratings was calculated for each of persons 1-40 and the
results totalled. While the individual correlations were mostly
positive, with some negative, the average group correlation was
strongly positive (e.g., .7).
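That averaging step might look roughly like this in Python (the data are simulated; 'release' and 'harm' stand in for the actual construct ratings, and the Fisher-z variant at the end is just one common way of averaging correlations, not necessarily what was done):

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation between two rating vectors."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()
    return float((xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc)))

rng = np.random.default_rng(0)
# Hypothetical data: for each of 40 persons, ratings of 8 elements on
# a 'release' construct and a 'likelihood of harm' construct.
release = rng.integers(1, 8, size=(40, 8)).astype(float)
harm = release + rng.normal(0, 1.5, size=(40, 8))  # mostly positive relation

# One correlation per person, then a simple group average.
per_person = [pearson_r(release[i], harm[i]) for i in range(40)]
mean_r = float(np.mean(per_person))

# A common alternative is to average via Fisher's z-transform,
# which reduces the bias of averaging raw coefficients directly.
z = np.arctanh(np.clip(per_person, -0.999, 0.999))
mean_r_fisher = float(np.tanh(z.mean()))
```

Here most per-person correlations come out positive, so the group average is strongly positive, much like the .7 figure mentioned above.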

A final point regarding corresponding regressions: is this an
original idea, or does it have an earlier history? I ask because
I have not come across the term before. I found the explanations
hard to follow, though I would be interested in the possibility
of you performing an analysis on some data I have. If you had the
time, there are two possible data sets. I have 40 grids, each of
8 common elements x 10 constructs (6 elicited + 4 supplied).
An analysis of the supplied constructs would be interesting as
it would allow comparison with other results.

Alternatively, if an individual grid can be analysed in the
manner described, analysing just 1-4 of the above grids is
another option. The common elements in all these grids were case
scenarios, which allows some consideration of the meaningfulness
of any results obtained.

If my suggestion to perform some analyses is feasible, let me know.

Regards,

Bob Green.
