Statistics: Estimating factor scores



A Comparison of Four Methods for Estimating Common Factor Scores.

Authors
Ambrosino, Robert J.
Descriptors
Comparative Analysis; Factor Analysis; Factor Structure; Oblique Rotation; Orthogonal Rotation; Statistical Analysis; Validity
Publication Date
1973-11
Pages
21
Abstract
A comparison of four procedures for estimating common factor measurements was made using artificially synthesized "data" matrices. Score estimates were compared with respect to how well they approximated the associated true factor scores and the extent of shrinkage in double cross-validation based on random samples. The Horst (1965), Bartlett (1937), and Anderson and Rubin (1956) methods gave what were judged to be satisfactory estimates for the (artificial) populations of data. The cross-validational procedures showed the Horn (1965) method to yield highly unstable estimates. It was concluded that the method of using columns of the factor loading matrix as weights in estimating factor measurements cannot be recommended for general applications, since this procedure consistently provided highly unstable estimates. (Author)
Note
Paper presented at Northeastern Educational Research Association (Ellenville, N.Y., November 1, 1973)
Record Type
Non-Journal

Estimation or prediction

Suppose that factor loadings are known and that:

Yi = ΛXi + εi

where VAR(Yi) = Σ, VAR(Xi) = I and VAR(εi) = Δ, with Δ a diagonal matrix.

Since \left( \begin{matrix} Y_i \\ X_i \end{matrix} \right) = \begin{bmatrix} \Lambda & I \\ I & 0 \end{bmatrix} \left( \begin{matrix} X_i \\ \varepsilon_i \end{matrix} \right), we obtain the joint variance

VAR \left( \begin{matrix} Y_i  \\ X_i \end{matrix} \right)  = \begin{bmatrix} \Lambda &  I \\ I & 0 \end{bmatrix} \begin{bmatrix} I &  0 \\ 0 & \Delta \end{bmatrix} \begin{bmatrix} \Lambda' &  I \\ I & 0 \end{bmatrix} = \begin{bmatrix} \Lambda \Lambda' + \Delta &  \Lambda \\ \Lambda' & I \end{bmatrix}

Under normality we could consider estimating Xi in a number of ways:
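As a sanity check on this setup, the block structure of the joint variance can be confirmed by simulation. The sketch below (all names such as `Lam` and `Delta` are illustrative choices, not from the original paper) draws from the model and compares the empirical covariance blocks with ΛΛ' + Δ and Λ:

```python
import numpy as np

# Simulate from the factor model Y_i = Lambda X_i + eps_i and check the
# joint covariance blocks empirically.
rng = np.random.default_rng(0)
p, k, n = 5, 2, 200_000          # observed dim, number of factors, sample size

Lam = rng.normal(size=(p, k))                    # loadings Lambda (known)
Delta = np.diag(rng.uniform(0.5, 1.5, size=p))   # diagonal error variances

X = rng.normal(size=(n, k))                      # factor scores, Var(X_i) = I
eps = rng.normal(size=(n, p)) * np.sqrt(np.diag(Delta))  # Var(eps_i) = Delta
Y = X @ Lam.T + eps                              # Y_i = Lambda X_i + eps_i

Sigma_hat = np.cov(Y.T)      # empirical Var(Y_i): approaches Lambda Lambda' + Delta
Cov_YX = (Y.T @ X) / n       # empirical Cov(Y_i, X_i): approaches Lambda
```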

Prediction

E( X_i | Y_i) = \Lambda' \left( \Lambda \Lambda' + \Delta \right) ^{-1} Y_i

Ordinary least-squares estimate

\hat{X}_i = \left( \Lambda' \Lambda \right)^{-1} \Lambda' Y_i

Weighted least-squares estimate

\hat{X}^W_i = \left( \Lambda' \Delta ^{-1} \Lambda \right)^{-1} \Lambda'  \Delta ^{-1}  Y_i
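The three estimators are direct linear-algebra computations. A minimal NumPy sketch, side by side, with illustrative values for `Lam`, `Delta` and `y` (in the factor-scores literature the WLS form is commonly attributed to Bartlett and the conditional-mean form is known as the regression, or Thomson, predictor):

```python
import numpy as np

rng = np.random.default_rng(1)
p, k = 4, 2
Lam = rng.normal(size=(p, k))                 # loadings Lambda (known)
Delta = np.diag(rng.uniform(0.5, 1.5, p))     # diagonal error variances
y = rng.normal(size=p)                        # one observed vector Y_i

Sigma = Lam @ Lam.T + Delta                   # Var(Y_i)
Dinv = np.diag(1.0 / np.diag(Delta))          # Delta^{-1}

x_reg = Lam.T @ np.linalg.solve(Sigma, y)                      # E(X_i | Y_i)
x_ols = np.linalg.solve(Lam.T @ Lam, Lam.T @ y)                # OLS estimate
x_wls = np.linalg.solve(Lam.T @ Dinv @ Lam, Lam.T @ Dinv @ y)  # WLS estimate
```

Note that the conditional-mean scores are systematically closer to zero than the WLS scores, a consequence of the shrinkage relationship derived below.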

Relationships

E( X_i | Y_i) = \Lambda' \left( \Lambda \Lambda' + \Delta \right) ^{-1} Y_i
= \Lambda' \left( \Delta^{-1} - \Delta^{-1} \Lambda \left[ I + \Lambda' \Delta^{-1} \Lambda \right]^{-1} \Lambda' \Delta^{-1} \right) Y_i
= \left( I + \Lambda' \Delta^{-1} \Lambda \right)^{-1} \Lambda' \Delta^{-1} Y_i
= \left( I + \Lambda' \Delta^{-1} \Lambda \right)^{-1} \left( \Lambda' \Delta^{-1} \Lambda \, \hat{X}^W_i + I \cdot 0 \right)

The second line applies the Woodbury identity to \left( \Lambda \Lambda' + \Delta \right)^{-1}. The last line shows that the conditional mean is a precision-weighted average of the weighted least-squares estimate \hat{X}^W_i (with weight \Lambda' \Delta^{-1} \Lambda) and the prior mean 0 (with weight I); that is, it shrinks \hat{X}^W_i toward zero.
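The identity between the conditional-mean and shrunken-WLS forms is easy to check numerically; a sketch with arbitrary illustrative values:

```python
import numpy as np

rng = np.random.default_rng(2)
p, k = 4, 2
Lam = rng.normal(size=(p, k))                 # loadings Lambda
Delta = np.diag(rng.uniform(0.5, 1.5, p))     # diagonal error variances
y = rng.normal(size=p)                        # one observed vector Y_i

Dinv = np.diag(1.0 / np.diag(Delta))
A = Lam.T @ Dinv @ Lam                        # Lambda' Delta^{-1} Lambda

x_reg = Lam.T @ np.linalg.solve(Lam @ Lam.T + Delta, y)   # E(X_i | Y_i)
x_wls = np.linalg.solve(A, Lam.T @ Dinv @ y)              # WLS estimate

# shrinkage form: (I + A)^{-1} (A x_wls + I . 0)
x_shrunk = np.linalg.solve(np.eye(k) + A, A @ x_wls)
# x_reg and x_shrunk agree up to floating-point error
```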