The Core

BillJanie@aol.com
Thu, 7 Mar 1996 15:01:43 -0500

Lois and Rainer,

You both appear to anticipate that corresponding regressions (CR) will
be a type of path analysis. This is not strictly so. True, we will use
path diagrams to visually portray the sequences of causes, but CR is
not analytically a type of path or structural equation modeling. CR is
an inductive procedure. Path analysis follows the hypothetico-deductive
strategy. This will be more evident later, after CR is explained and
can be contrasted with path analysis. In this posting I would like to
propose the core of the method of corresponding regressions.

The Core of the Method

First, let's return to points made in an earlier posting that reveal
the core of CR. Let's create a causal model: let Y = X1 + X2, where X1
and X2 are two columns containing 48 uniformly distributed random
numbers, and Y is their sum across each row. Thus Y is logically
dependent on both X1 and X2, while X1 and X2 are logically independent
of one another and of Y, since they were generated independently of one
another and of Y.

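(For anyone who wants to replicate this at a keyboard, here is a
minimal sketch in Python with numpy. The language, the library, and the
variable names are mine, not part of the method, and I am assuming from
the listing below that "uniformly distributed random numbers" means
single-digit integers, 0 through 9.)

import numpy as np

rng = np.random.default_rng()

# Two logically independent causes: 48 uniform random values each,
# assumed here to be integers 0 through 9 (see the listing below).
x1 = rng.integers(0, 10, size=48)
x2 = rng.integers(0, 10, size=48)

# The formal effect: Y depends on both causes by construction.
y = x1 + x2
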
In terms of logical implication, the value of Y carries implications
concerning the probable values of X1 and X2. A high value of Y implies
the random pairing of high values of X1 and X2. A high value of an X
variable, however, does not imply a high value of Y. A high X1 value
could have been randomly paired with a low X2 value, creating a
mid-range value of Y. Another way to think of this is that a child
implies sexually mature adults (parents), while sexually mature adults
do not necessarily imply a child. They may be childless.

In other words, X1 and X2 are independent variables (formal causes) and
Y is their dependent variable (formal effect).

X1 and X2 are uncorrelated with one another; their conjugation
(pairing) is random. Both X1 and X2 correlate with Y at about .7, since
they each explain about half of Y. Remember that it is the coefficient
of determination (the square of the correlation) that reveals the
percent of overlap between variables. The square of .7 is about .5,
i.e., 50%.

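(Continuing the Python sketch above, you can check these figures
directly; the exact values will wobble a bit from sample to sample.)

# Each cause correlates with Y at about .7; squared, about .5.
print(np.corrcoef(x1, y)[0, 1])       # roughly .7
print(np.corrcoef(x2, y)[0, 1])       # roughly .7
print(np.corrcoef(x1, x2)[0, 1])      # roughly 0
print(np.corrcoef(x1, y)[0, 1] ** 2)  # coefficient of determination, ~.5
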
Numerical Example

The following variables were generated by the above strategy. I list
them sorted by Y, in order to conserve space. When they originally came
out of the computer, they were in random order.

Lowest Y's      The midranges of Y
Part A          Part B          Part C
X1  X2   Y      X1  X2   Y      X1  X2   Y
 0   1   1       3   5   8       4   6  10
 0   1   1       2   6   8       8   2  10
 0   2   2       0   8   8       5   5  10
 2   1   3       4   4   8       8   2  10
 0   3   3       4   5   9       5   6  11
 1   2   3       3   6   9       9   2  11
 3   3   6       7   2   9       9   2  11
 2   4   6       6   3   9       2   9  11
 0   6   6       3   6   9       9   2  11
 1   6   7       1   8   9       9   3  12
 5   2   7       4   6  10       4   8  12
 6   1   7       5   5  10       7   5  12

Highest Y's

Part D
X1  X2   Y
 3   9  12
 9   3  12
 7   6  13
 7   6  13
 6   7  13
 5   9  14
 8   6  14
 8   7  15
 7   8  15
 8   7  15
 6   9  15
 9   7  16

Partitioning by Ranges

As above, first sort all three columns by Y, so that the values of Y
run from least to most. Now take the rows of Y, X1, and X2 that contain
the lowest 12 values of Y (Part A) and the rows containing the highest
12 values of Y (Part D). Concatenate these two matrices and call the
result the extremes by Y (EOY). Partition off the remaining matrix of
Y, X1, and X2 values, the rows containing the mid-range of Y values
(Parts B and C), and call it the midrange by Y (MOY).

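(In the Python sketch, the sorting and partitioning look like this;
eoy and moy are my names for the two partitions.)

# Sort all three columns together by Y, least to most.
order = np.argsort(y)
data = np.column_stack([x1, x2, y])[order]

# Extremes by Y (EOY): lowest 12 and highest 12 rows (Parts A and D).
eoy = np.vstack([data[:12], data[-12:]])

# Midrange by Y (MOY): the remaining 24 rows (Parts B and C).
moy = data[12:-12]
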
Our data have been partitioned into two sets: the data corresponding to
the extreme values of Y (EOY) and those corresponding to the midrange
of Y (MOY).

Correlate Y, X1, and X2 in EOY:

      X1    X2     Y
X1  1.00   .49   .88
X2   .49  1.00   .84
Y    .88   .84  1.00

Correlate Y, X1, and X2 in MOY:

      X1    X2     Y
X1  1.00  -.88   .59
X2  -.88  1.00  -.13
Y    .59  -.13  1.00

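(Matrices like these come straight out of np.corrcoef; rowvar=False
tells numpy that the variables are the columns, here in the order X1,
X2, Y. A fresh random draw will give different numbers but the same
pattern.)

# 3 x 3 correlation matrices for (X1, X2, Y) in each partition.
print(np.corrcoef(eoy, rowvar=False))
print(np.corrcoef(moy, rowvar=False))
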
You get very different correlations across the two matrices. At the
extremes of Y (EOY), variables X1 and X2 tend to be positively
correlated. In the midrange by Y matrix (MOY), X1 and X2 tend to be
negatively correlated. This is because at the extremes of Y the X
variables are similar to one another, while at the mid-range of Y the X
values tend to be different, and they cancel one another out to produce
mid-range values of Y.

Thus the correlations between the X (independent) variables polarize
across the extremes versus the midrange of Y (the dependent variable),
i.e., .49 versus -.88. This is the core of the method of corresponding
regressions. The polarization only occurs when the data are sorted by
the dependent variable, not by the independent variable(s).

To show that the polarization does not occur when the data are sorted
by an independent variable, put the two matrices back together to
recover our original data. In this matrix, X1 and X2 will tend to be
correlated about zero; in our example the r was -.05, which is close
enough. This is not surprising, in that both X1 and X2 are just random
numbers. Now sort Y, X1, and X2 by X1, partition them into two matrices
corresponding to the extremes of X1 (EOX1) and the midrange of X1
(MOX1), and find the correlations among Y, X1, and X2 in each matrix.

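(The control runs the same way in the sketch, only sorted on X1.)

# Re-sort the full data by X1 instead of Y, then partition as before.
order = np.argsort(x1)
data = np.column_stack([x1, x2, y])[order]
eox1 = np.vstack([data[:12], data[-12:]])   # extremes of X1
mox1 = data[12:-12]                         # midrange of X1

print(np.corrcoef(eox1, rowvar=False))
print(np.corrcoef(mox1, rowvar=False))
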
EOX1 correlations

      X1    X2     Y
X1  1.00  -.08   .82
X2  -.08  1.00   .50
Y    .82   .50  1.00

MOX1 correlations

      X1    X2     Y
X1  1.00  -.01   .58
X2  -.01  1.00   .81
Y    .58   .81  1.00

No polarization of the X1 to X2 correlations occurs, i.e., -.08 versus
-.01. The independent variables X1 and X2 are still correlated
approximately zero. The same happens when you partition by X2.

The polarization of the correlation between independent variables
across the ranges of the dependent variable, but not across the ranges
of the independent variables, is the core of the method of
corresponding regressions. The rest is just expressing this fact in the
algebra of regression analysis. If we cannot agree on this polarization
business, the rest will be only fancy math and will only serve to
confuse things. It is the CORE of the method of corresponding
regressions.

What do y'all think of this core?

Bill

References

Chambers, W. V. (1991). Inferring formal causation from corresponding
regressions. Journal of Mind and Behavior, 12(1), 49-70.

Lamiell, J. T. (1991). Beware the illusion of technique. Journal of
Mind and Behavior, 12(1), 71-7.

Williams, R. N. (1991). Untangling cause, necessity, temporality and
method: Response to Chambers' method of corresponding regressions.
Journal of Mind and Behavior, 12(1), 77-83.

Chambers, W. V. (1991). Corresponding regressions, procedural evidence,
and the dialectics of substantive theory, metaphysics and methodology.
Journal of Mind and Behavior, 12(1), 83-92.