Scale Dimensionality
Dr. Carlo Magno
De La Salle University
Measuring Achievement
Unidimensional Scaling
• Aimed at selecting a set of data items that can be empirically demonstrated to correspond to a single social-psychological dimension (Gordon, 1977).
• Methods:
– Thurstone or Equal-Appearing Interval Scaling
– Likert or "Summative" Scaling
– Guttman or "Cumulative" Scaling
Multidimensional Scaling
• There is more than a single dimension that underlies a set of observations.
• Aside from measuring the degree of the object, it classifies the object according to two or more properties.
• The trait being measured as a whole is called a latent construct/variable.
• The components of the latent construct are called factors/subscales/manifest variables.
Multidimensional Scaling
• Methods of analyzing multidimensionality: Factor Analysis
– Exploratory Factor Analysis
• Principal Component Analysis
• Joining Tree Clustering
• K-Means Clustering
– Confirmatory Factor Analysis
• Structural Equation Modeling (SEM)
Questions?
• Give examples of unidimensional and multidimensional constructs.
• How do you know if a construct is unidimensional or multidimensional?
• Sandra Bem (1980) – BSRI
• Aggression Scale
Gender Role Identity
Masculinity
Femininity
Aggression
Physical
Verbal
• Ambivalent Sexism Inventory by Glick and Fiske (1996)
• 2 Factor Theory of Intelligence by Charles Spearman
Sexism
Benevolent sexism
Hostile sexism
Intelligence
G Factor
S factor
• Theory of Fluid and Crystallized Intelligence by Raymond Cattell
• Myers Briggs Type Indicator
Intelligence
Crystallized Intelligence
Fluid Intelligence
Temperament
Extraversion/Introversion
Sensing/Intuition
Thinking/Feeling
Judging/Perceiving
Factor Analysis
• Purposes: (1) to reduce the number of variables, and (2) to detect structure in the relationships between variables, that is, to classify variables.
• A factor is a set of highly intercorrelated variables.
• Principal Component Analysis
– Factor Loadings
– Eigenvalues
– Factor Rotation
• Principal Factor Analysis
– Communalities
Principal Component Analysis
• Combining variables that are highly correlated with each other.
• Factor Loadings – the correlations between the factors and the variables.
– A factor loading of .30 means the variable contributes meaningfully to the factor.
– A factor loading of .40 indicates a high contribution to the factor.
– If the factors and the variables are strongly correlated, they are measuring the same thing.
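As an illustration of these cutoffs (a sketch, not from the lecture), the code below simulates four variables driven by one common factor, scores the first principal component, and checks each variable's loading, computed as its correlation with the component, against the .30 rule of thumb. The data and seed are invented:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
f = rng.normal(size=n)  # hypothetical common factor
X = np.column_stack([f + 0.5 * rng.normal(size=n) for _ in range(4)])

# Standardize the variables and score the first principal component.
Z = (X - X.mean(axis=0)) / X.std(axis=0)
R = np.corrcoef(X, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(R)      # eigenpairs in ascending order
pc1 = Z @ eigvecs[:, -1]                  # scores on the largest component

# Loadings = correlations between each variable and the component.
loadings = np.array([np.corrcoef(X[:, j], pc1)[0, 1] for j in range(4)])
meaningful = np.abs(loadings) >= 0.30     # the .30 rule of thumb above
print(np.round(np.abs(loadings), 2))
print(meaningful)
```

With a strong simulated factor, every variable clears the .30 cutoff by a wide margin.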
Eigenvalues
• A measure of how much variance each successive factor extracts.
• The first factor is generally more highly correlated with the variables than the second factor. This is to be expected because the factors are extracted successively, each accounting for less and less of the overall variance.
• Factor extraction stops when factors begin to yield low eigenvalues.
Eigenvalues
• Methods of evaluating eigenvalues:
– The Kaiser criterion. Retain only factors with eigenvalues greater than 1. Unless a factor extracts at least as much variance as the equivalent of one original variable, it is dropped (Kaiser, 1960).
– The scree test. A graphical method in which the eigenvalues are placed in a line plot. Retain factors up to the point where the smooth decrease of eigenvalues levels off to the right of the plot. To the right of this point one presumably finds only "factorial scree"; "scree" is the geological term for the debris that collects on the lower part of a rocky slope.
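The Kaiser criterion can be sketched in a few lines. The example below (invented data, not from the lecture) simulates six variables built from two underlying factors, computes the eigenvalues of their correlation matrix, and retains those greater than 1:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
f1, f2 = rng.normal(size=n), rng.normal(size=n)   # two hypothetical factors
X = np.column_stack([f1 + 0.3 * rng.normal(size=n) for _ in range(3)] +
                    [f2 + 0.3 * rng.normal(size=n) for _ in range(3)])

R = np.corrcoef(X, rowvar=False)                  # 6 x 6 correlation matrix
eigvals = np.linalg.eigvalsh(R)[::-1]             # eigenvalues, largest first
retained = eigvals[eigvals > 1.0]                 # Kaiser criterion
print("eigenvalues:", np.round(eigvals, 2))
print("factors retained:", len(retained))
```

The eigenvalues sum to the number of variables (the trace of the correlation matrix), so a factor with an eigenvalue above 1 extracts more than one variable's worth of variance; here the two-factor structure is recovered.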
Factor Rotation
• Maximizes the variance (variability) of the "new" variable (factor), while minimizing the variance around the new variable.
• Factor loadings are plotted in a scatterplot in which each variable is represented as a point. The axes can be rotated (clockwise or counterclockwise) in any direction without changing the relative locations of the points to each other; however, the actual coordinates of the points, that is, the factor loadings, would change.
• The goal is to position the factor axes through, or at least nearer to, clusters of highly correlated variables.
Why are factors rotated?
• Factors are redefined in a second-phase factor analysis in order to achieve a simpler factor structure that is interpretable.
• Factor rotation enables the researcher to meet conditions such as:
– Each variable loads strongly on one and only one factor
– Each factor shows two or more strong loadings
– Most loadings are either high or low, with few intermediate values
Methods of Factor Rotation
• Orthogonal Rotation
– Varimax
– Quartimax
– Equamax
• Oblique Rotation
(Figures: unrotated factors; orthogonal rotation of the factor axes)
Orthogonal Rotation
• The perpendicularity of factor axes is maintained.
• The rotated factors explain the same total variance as do the unrotated factors.
• The proportion of variance of individual variables that is explained changes (change in communalities).
Types of Orthogonal Rotation
• Varimax rotation – maximizes the variance of the squared factor loadings associated with each factor; maximizes the factor loadings within each column.
• Quartimax rotation – minimizes the cross-products of the factor loadings; maximizes the factor loadings within each item (row).
• Equamax rotation – a compromise between varimax and quartimax; enhances interpretability of both factors and items (variables).
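Varimax rotation can be sketched numerically. The function below is the standard iterative SVD-based algorithm (not taken from the lecture), applied to a hypothetical unrotated loading matrix with two clusters of variables; note that an orthogonal rotation leaves each variable's communality unchanged while simplifying the loading pattern:

```python
import numpy as np

def varimax(L, max_iter=100, tol=1e-8):
    """Varimax rotation of a loading matrix L (variables x factors).
    Standard iterative algorithm: each step finds the best orthogonal
    rotation from an SVD of the criterion gradient."""
    p, k = L.shape
    R = np.eye(k)
    d = 0.0
    for _ in range(max_iter):
        Lr = L @ R
        u, s, vt = np.linalg.svd(
            L.T @ (Lr**3 - Lr @ np.diag((Lr**2).sum(axis=0)) / p)
        )
        R = u @ vt                       # updated orthogonal rotation
        d_new = s.sum()
        if d and d_new / d < 1 + tol:    # criterion stopped improving
            break
        d = d_new
    return L @ R

# Hypothetical unrotated loadings: six variables, two factors.
L0 = np.array([[0.73, 0.42], [0.70, 0.45], [0.76, 0.40],
               [0.44, -0.68], [0.41, -0.71], [0.47, -0.66]])
Lr = varimax(L0)
print(np.round(Lr, 2))   # simpler structure: one strong loading per row
```

After rotation each variable loads strongly on one factor and weakly on the other, the "simple structure" the rotation criteria above aim for.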
Oblique Rotation
• Provides greater flexibility in positioning factor axes by relaxing the requirement that the factors be orthogonal.
• Axes are positioned directly through clusters of highly correlated variables.
• Creates factors that more strongly represent clusters of highly correlated variables than orthogonal rotation does.
What rotation should be used?

Orthogonal:
• Purpose: to create a reduced set of orthogonal factor variates that will be entered into other analyses.
• If the loadings produced are interpretable, stick to the simpler solution.

Oblique:
• If the FA is being used to uncover the factor structure underlying a set of variables.
• If there are sound theoretical reasons to expect high correlations among factors.
Reading for Factor Rotation
Diekhoff, G. (1992). Statistics for the social and behavioral sciences: Univariate, bivariate and multivariate. Dubuque, IA: Wm. C. Brown Publishers.
Principal Factor Analysis
• Also known as Principal Axis Factoring
• The factors do not extract all of the variance in the items (presence of scree)
• Communality – the proportion of the variance of a particular item that is due to common factors (shared with other items)
What is the difference between PCA and PFA?
• PCA: each squared factor loading gives the proportion of variance in an original variable that is explained by a given factor variate; across all p components these proportions sum to 1.0 for each variable.
• PFA: communalities are the sums of squared loadings for a given variable across the extracted factors – the proportion of variance in a variable that is explained by the set of extracted factor variates.
Exploratory Factor Analysis
• Absence of a prior hypothesis regarding the factor structure underlying a battery of attributes (Tucker & MacCallum, 1997).
• Used in the early stages of research on developing a concept.
• Hypotheses will be at best loosely defined; the general objective of the research is to explore the factorial structure of the domain.
Exploratory Factor Analysis
• Analysis:
– Involves estimation of all parameters of the model (common factor weights, intercorrelations, and unique variances)
• Use:
– Aids the researcher in determining the number of factors, interpreting the nature of those factors, and refining the battery of attributes for the purpose of further study of the domain
Exploratory Factor Analysis
• Techniques
– Tree analysis
– K-means cluster analysis
– Principal component analysis
– Multidimensional scaling
Procedure
• Identify the items that will be analyzed through EFA
• Calculate the correlation matrix
• Examine the matrix
– Inverse of the matrix
– Bartlett test
– Anti-image covariance matrix
– Kaiser-Meyer-Olkin criterion (KMO)
• Show the factor loadings
Procedure
• Decide on the number of factors to be extracted using the scree test and elbow method
• Interpret the factors extracted
Statistical Output
• The Kaiser-Meyer-Olkin (KMO) measure of sampling adequacy tests whether the partial correlations among variables are small.
• Bartlett's test of sphericity tests whether the correlation matrix is an identity matrix, which would indicate that the factor model is inappropriate.
• Anti-image – the anti-image correlation matrix contains the negatives of the partial correlation coefficients, and the anti-image covariance matrix contains the negatives of the partial covariances. In a good factor model, most of the off-diagonal elements will be small. The measure of sampling adequacy for a variable is displayed on the diagonal of the anti-image correlation matrix.
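Bartlett's test of sphericity has a simple closed form: the statistic is -(n - 1 - (2p + 5)/6) ln|R| with p(p - 1)/2 degrees of freedom. A minimal sketch, with an invented correlation matrix and sample size:

```python
import math
import numpy as np

def bartlett_sphericity(R, n):
    """Bartlett's test of sphericity: chi-square statistic and df for
    H0: the p x p correlation matrix R is an identity matrix.
    n is the sample size."""
    p = R.shape[0]
    chi2 = -(n - 1 - (2 * p + 5) / 6) * math.log(np.linalg.det(R))
    df = p * (p - 1) // 2
    return chi2, df

# Hypothetical correlation matrix and sample size for illustration.
R = np.array([[1.0, 0.5, 0.4],
              [0.5, 1.0, 0.3],
              [0.4, 0.3, 1.0]])
chi2, df = bartlett_sphericity(R, n=100)
print(f"chi2 = {chi2:.2f}, df = {df}")   # large chi2 -> reject identity
```

A significant chi-square (compared against the chi-square distribution with df degrees of freedom) rejects the identity-matrix hypothesis, supporting the use of a factor model.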
Confirmatory Factor Analysis
• There is a developed and specific hypothesis about the factorial structure of a battery of attributes.
• The hypothesis concerns the number of common factors, their pattern of intercorrelation, and pattern of common factor weights.
• Used to indicate how well a set of data fits the hypothesized structure.
• The structure is hypothesized in advance.
• Follow-up to a standard factor analysis.
Confirmatory Factor Analysis
• Allows the investigator to fit the common factor model to observed data under various types of constraints.
• Analysis: the parameters of the model are estimated, and the goodness of fit of the solution to the data is evaluated.
• The degree to which the solution fits the data provides evidence for or against the prior hypothesis.
Confirmatory Factor Analysis
• A solution which fit well would lend support for the hypothesis and provide evidence for construct validity of the attributes and the hypothesized factorial structure of the domain as represented by the battery of attributes.
• CFA is integrated in Structural Equation Modeling (SEM), helping create the latent variables modeled by SEM.
• It is done to validate a scale or index by demonstrating that its constituent items load on the same factor, and to drop proposed scale items which cross-load on more than one factor.
Approaches in CFA
• Traditional approach – uses principal axis factoring (PAF/PFA). This method allows the researcher to examine the factor loadings of indicator variables to determine whether they load on the latent variables (factors) as predicted by the researcher's model.
• SEM approach – model building in which the covariance of every pair of factors is determined along with the corresponding errors. It tests whether the factors are significant components of the latent construct.
Goodness of Fit
• Noncentrality Interval Estimation
• Single Sample Goodness of fit Index
Noncentrality Interval Estimation
• Represents a change of emphasis in assessing model fit. Instead of testing the hypothesis that the fit is perfect, we ask the questions (a) "How bad is the fit of our model to our statistical population?" and (b) "How accurately have we determined population badness-of-fit from our sample data?"
Noncentrality Indices
• Steiger-Lind RMSEA – compensates for model parsimony by dividing the estimate of the population noncentrality parameter by the degrees of freedom. This ratio, in a sense, represents a "mean square badness-of-fit."
• Values of the RMSEA index below .05 indicate good fit, and values below .01 indicate outstanding fit.
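One common single-sample formulation of the RMSEA estimates the per-degree-of-freedom noncentrality as sqrt(max((χ²/df - 1)/(n - 1), 0)). A sketch with invented fit results:

```python
import math

def rmsea(chi2, df, n):
    """Steiger-Lind RMSEA from a model chi-square, its degrees of
    freedom, and sample size n (one common sample formulation;
    negative noncentrality estimates are truncated at zero)."""
    return math.sqrt(max((chi2 / df - 1) / (n - 1), 0.0))

# Hypothetical fit results for illustration.
print(round(rmsea(chi2=48.5, df=40, n=300), 4))   # close-fitting model
print(round(rmsea(chi2=160.0, df=40, n=300), 4))  # poorly fitting model
```

The first model falls under the .05 good-fit threshold cited above; the second exceeds it substantially.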
Noncentrality Indices
• McDonald's Index of Noncentrality – represents one approach to transforming the population noncentrality index F* into the range from 0 to 1.
• Good fit is indicated by values above .95.
Noncentrality Indices
• The Population Gamma Index – an estimate of the "population GFI," the value of the GFI that would be obtained if we could analyze the population covariance matrix Σ.
• For this index, good fit is indicated by values above .95.
Noncentrality Indices
• Adjusted Population Gamma Index (Joreskog AGFI) - estimate of the population GFI corrected for model parsimony. Good fit is indicated by values above .95.
Single Sample Goodness of fit Index
• Joreskog GFI. Values above .95 indicate good fit. This index is a negatively biased estimate of the population GFI, so it tends to produce a slightly pessimistic view of the quality of population fit.
• Joreskog AGFI. Values above .95 indicate good fit. This index is, like the GFI, a negatively biased estimate of its population equivalent.
Single Sample Goodness of fit Index
• Akaike Information Criterion. This criterion is useful primarily for deciding which of several nested models provides the best approximation to the data. When trying to decide between several nested models, choose the one with the smallest Akaike criterion.
• Schwarz's Bayesian Criterion. This criterion, like the Akaike, is used for deciding among several models in a nested sequence. When deciding among several nested models, choose the one with the smallest Schwarz criterion value.
• Browne-Cudeck Cross-Validation Index. Browne and Cudeck (1989) proposed a single-sample cross-validation index as a follow-up to their earlier work (Cudeck & Browne, 1983). It requires two samples: the calibration sample for fitting the models, and the cross-validation sample.
• Independence Model Chi-square and df. These are the Chi-square goodness-of-fit statistic, and associated degrees of freedom, for the hypothesis that the population covariances are all zero.
Single Sample Goodness of fit Index
• Bentler-Bonett (1980) Normed Fit Index. Measures the relative decrease in the discrepancy function caused by switching from a "Null Model" or baseline model to a more complex model. This index approaches 1 in value as fit becomes perfect. However, it does not compensate for model parsimony.
• Bentler-Bonett Non-Normed Fit Index. This comparative index takes into account model parsimony.
• Bentler Comparative Fit Index. This comparative index estimates the relative decrease in population noncentrality obtained by changing from the "Null Model" to the k'th model.
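The two Bentler indices above have simple chi-square-based forms: the NFI is the proportional drop in chi-square from the null model, and the CFI makes the same comparison using noncentrality (chi-square minus degrees of freedom). A sketch with invented chi-square values:

```python
def nfi(chi2_model, chi2_null):
    """Bentler-Bonett Normed Fit Index: relative decrease in the
    discrepancy when moving from the null (baseline) model."""
    return (chi2_null - chi2_model) / chi2_null

def cfi(chi2_model, df_model, chi2_null, df_null):
    """Bentler Comparative Fit Index, based on estimated noncentrality
    (chi-square minus degrees of freedom, truncated at zero)."""
    d_model = max(chi2_model - df_model, 0.0)
    d_null = max(chi2_null - df_null, 0.0)
    return 1.0 - d_model / max(d_null, d_model, 1e-12)

# Hypothetical chi-square values for illustration.
print(round(nfi(55.0, 900.0), 3))
print(round(cfi(55.0, 40, 900.0, 55), 3))
```

Both indices approach 1 as the model's discrepancy shrinks relative to the null model; the CFI's noncentrality correction keeps it from penalizing models whose chi-square is close to its degrees of freedom.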
Single Sample Goodness of fit Index
• James-Mulaik-Brett Parsimonious Fit Index. Compensates for model parsimony; basically, it operates by rescaling the Bentler-Bonett Normed Fit Index.
• Bollen's Rho. This comparative fit index computes the relative reduction in the discrepancy function per degree of freedom when moving from the "Null Model" to the k'th model.
• Bollen's Delta. This index is similar in form to the Bentler-Bonett index, but rewards simpler models (those with higher degrees of freedom).
Reference for the Goodness of fit for CFA
StatSoft, Inc. (2005). STATISTICA electronic manual. Tulsa OK: Author.
Issue?
• When does a study call for an exploratory or a confirmatory factor analysis?
If you wish to restrict the number of factors extracted to a particular number and specify particular patterns of relationship between measured variables and common factors, and this is done a priori (before seeing the data), then the confirmatory procedure is for you. If you have no such well specified a priori restrictions, then use the exploratory procedure.