Introduction to Structural Equation Modelling
Joaquín Aldás Manzano Universitat de València * [email protected]
Partial Least Squares Path Modelling (PLSPM)
Partial Least Squares (PLS)
■ In the 1960s Karl Jöreskog proposed an algorithm to estimate covariance-based structural models: the maximum likelihood (ML) algorithm
■ In 1970 the ML algorithm was made operational in the first widely used commercial software: LISREL
■ Herman Wold (Jöreskog's thesis supervisor) questioned the applicability of ML estimation of covariance structure models because of its strong distributional assumptions and the large sample sizes required
■ Wold proposed an alternative approach for these situations: Partial Least Squares
■ The first general PLS algorithm Wold (1973) offered was called NIPALS (Nonlinear Iterative PArtial Least Squares), and a final presentation of the PLS approach to path modelling with latent variables appears in Wold's (1982) paper.
Covariance based SEM rationale
X1 X2 X3 X4
X1 1.00
X2 .087 1.00
X3 .140 .080 1.00
X4 .152 .143 .272 1.00
X1 X2 X3 X4
X1 1.00
X2 .087 1.00
X3 .140 .080 1.00
X4 .152 .143 .272 1.00
(Path diagram: factor correlation .83; loadings .33 and .26 for X1, X2 on the first factor, and .46 and .59 for X3, X4 on the second)

$\rho_{X_1 X_2} = 0.33 \times 0.26 = 0.085$ (observed: .087)
$\rho_{X_1 X_3} = 0.33 \times 0.83 \times 0.46 = 0.126$ (observed: .140)
Covariance based SEM rationale
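The tracing rule above can be sketched in a few lines of code. A minimal sketch: the loading and factor-correlation values are the illustrative ones from the slide, and the dictionary layout is my own.

```python
# Model-implied correlations for the slide's two-factor model:
# x1, x2 load on F1 (.33, .26); x3, x4 load on F2 (.46, .59);
# the correlation between the two factors is .83.
loadings = {"x1": ("F1", 0.33), "x2": ("F1", 0.26),
            "x3": ("F2", 0.46), "x4": ("F2", 0.59)}
phi = 0.83  # factor correlation

def implied_corr(a, b):
    """Path-tracing rule: product of the two loadings, times the
    factor correlation when the indicators sit on different factors."""
    fa, la = loadings[a]
    fb, lb = loadings[b]
    return la * lb if fa == fb else la * phi * lb

print(round(implied_corr("x1", "x2"), 3))  # 0.086, observed .087
print(round(implied_corr("x1", "x3"), 3))  # 0.126, observed .140
```

Estimation then searches for the parameter values whose implied correlations best reproduce the observed matrix.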
CBSEM solution vs. PLSPM solution
CBSEM solution: factor correlation .83; loadings .33, .26, .46, .59
PLSPM solution: LV correlation .22; loadings .75, .60, .54, .71
Terminology

(Path diagram: independent and dependent latent variables; the manifest variables or indicators of the dependent and independent constructs; the errors; and the weights and loadings connecting indicators to constructs)
• Raw data: manifest variable indicators
1. Initialization
2. Inner weights estimation
3. Inner approximation
4. Outer weights estimation
5. Outer approximation
6. Stop criterion
7. Estimation of loadings, weights and path coefficients
PLSPM algorithm rationale (Fornell, Barclay & Rhee, 1988; Wold, 1966)
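The whole iteration (steps 1 to 6) can be sketched compactly. A minimal sketch, not a production implementation: the simulated data, block sizes, noise levels and the use of Mode A (covariance) outer weights are my own assumptions, chosen to mirror the three-block model used in the following slides.

```python
import numpy as np

# Simulated data with the slides' structure: blocks 1 and 2 point to block 3.
rng = np.random.default_rng(0)
n = 200
f = rng.standard_normal((n, 3))
f[:, 2] = 0.5 * f[:, 0] + 0.5 * f[:, 1] + 0.5 * rng.standard_normal(n)
blocks = [[0, 1], [2, 3], [4, 5, 6]]      # indicator columns per LV
adjacent = {0: [2], 1: [2], 2: [0, 1]}    # inner-model neighbours
X = np.column_stack([f[:, i] + 0.6 * rng.standard_normal(n)
                     for i, b in enumerate(blocks) for _ in b])

def std(v):
    return (v - v.mean()) / v.std()

w = [np.ones(len(b)) for b in blocks]     # Step 1: unit weights
for _ in range(300):
    w_old = [wi.copy() for wi in w]
    # Outer approximation: each LV as a weighted sum of its indicators
    outer = [std(X[:, b] @ wi) for b, wi in zip(blocks, w)]
    # Steps 2-3: inner weights = covariances between adjacent LVs,
    # then the inner approximation as their weighted sum
    inner = [std(sum(np.cov(outer[i], outer[j])[0, 1] * outer[j]
                     for j in adjacent[i])) for i in range(3)]
    # Step 4: outer weights (Mode A: indicator / inner-LV covariances)
    w = [np.array([np.cov(X[:, h], inner[i])[0, 1] for h in blocks[i]])
         for i in range(3)]
    w = [wi / np.linalg.norm(wi) for wi in w]
    # Step 6: stop when the weights no longer change
    if sum(np.abs(wi - wo).sum() for wi, wo in zip(w, w_old)) < 1e-5:
        break

# Step 5 / input to step 7: final LV scores from the last weights
scores = [std(X[:, b] @ wi) for b, wi in zip(blocks, w)]
```

With structured data like this, the recovered score for block 3 correlates strongly with the latent variable used to generate it.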
Step 1: Initialization
• ξ is calculated by adding up its indicators (all weights and loadings set to 1):

$\xi_1^{outer} = x_{11} + x_{12}$
$\xi_2^{outer} = x_{21} + x_{22}$
$\xi_3^{outer} = x_{31} + x_{32} + x_{33}$
$w_{11} = w_{12} = w_{21} = w_{22} = w_{31} = w_{32} = w_{33} = 1$
PLSPM algorithm rationale (Fornell, Barclay & Rhee, 1988; Wold, 1966)
Step 2: Inner weights estimation
• The covariance between adjacent LVs is calculated:

$e_{ij} = \begin{cases} \operatorname{cov}(\xi_i^{outer}, \xi_j^{outer}) & \text{if } \xi_i, \xi_j \text{ are adjacent} \\ 0 & \text{otherwise} \end{cases}$
PLSPM algorithm rationale (Fornell, Barclay & Rhee, 1988; Wold, 1966)
Step 3: Inner approximation
• Each LV is estimated from the previous outer estimation of its adjacent LVs:

$\xi_i^{inner} = \sum_j e_{ij}\,\xi_j^{outer}$

$\xi_1^{inner} = e_{13}\,\xi_3^{outer}$
$\xi_2^{inner} = e_{23}\,\xi_3^{outer}$
$\xi_3^{inner} = e_{13}\,\xi_1^{outer} + e_{23}\,\xi_2^{outer}$
PLSPM algorithm rationale (Fornell, Barclay & Rhee, 1988; Wold, 1966)
Step 4: Outer weights estimation
• Weights and loadings estimation (reflective blocks):

$x_{jh} = c_{jh} + w_{jh}\,\xi_j^{inner} + \varepsilon_{jh}$

$x_{31} = c_{31} + w_{31}\,\xi_3^{inner} + \varepsilon_{31}$
$x_{32} = c_{32} + w_{32}\,\xi_3^{inner} + \varepsilon_{32}$
$x_{33} = c_{33} + w_{33}\,\xi_3^{inner} + \varepsilon_{33}$
$\Rightarrow w_{31}, w_{32}, w_{33}$

Reflective
PLSPM algorithm rationale (Fornell, Barclay & Rhee, 1988; Wold, 1966)
Step 4: Outer weights estimation
• Weights and loadings estimation (formative blocks):

$\xi_j^{inner} = c_j + \sum_h w_{jh}\,x_{jh} + \delta_j$

$\xi_1^{inner} = c_1 + w_{11}x_{11} + w_{12}x_{12} + \delta_1$
$\xi_2^{inner} = c_2 + w_{21}x_{21} + w_{22}x_{22} + \delta_2$
$\Rightarrow w_{11}, w_{12}, w_{21}, w_{22}$

Formative
PLSPM algorithm rationale (Fornell, Barclay & Rhee, 1988; Wold, 1966)
Step 5: Outer approximation
• ξ is calculated from the latest estimation of the weights:

$\hat{\xi}_j^{outer} = \sum_h w_{jh}\,x_{jh}$

$\hat{\xi}_1^{outer} = w_{11}x_{11} + w_{12}x_{12}$
$\hat{\xi}_2^{outer} = w_{21}x_{21} + w_{22}x_{22}$
$\hat{\xi}_3^{outer} = w_{31}x_{31} + w_{32}x_{32} + w_{33}x_{33}$
PLSPM algorithm rationale (Fornell, Barclay & Rhee, 1988; Wold, 1966)
Step 6: Stop criterion
• We stop iterating when the weights no longer change between consecutive iterations:

$\sum_{j,h} \left| w_{jh}^{(k)} - w_{jh}^{(k-1)} \right| < 10^{-5}$

$\left| w_{11}^{(k)} - w_{11}^{(k-1)} \right| + \left| w_{12}^{(k)} - w_{12}^{(k-1)} \right| + \left| w_{21}^{(k)} - w_{21}^{(k-1)} \right| + \left| w_{22}^{(k)} - w_{22}^{(k-1)} \right| + \left| w_{31}^{(k)} - w_{31}^{(k-1)} \right| + \left| w_{32}^{(k)} - w_{32}^{(k-1)} \right| + \left| w_{33}^{(k)} - w_{33}^{(k-1)} \right| < 10^{-5}$
PLSPM algorithm rationale (Fornell, Barclay & Rhee, 1988; Wold, 1966)
Step 7: Final estimation
• The final parameter estimation is performed
PLSPM algorithm rationale (Fornell, Barclay & Rhee, 1988; Wold, 1966)
Latent variable scores: $\xi_j = \xi_j^{outer}$

Loadings (reflective): $x_{jh} = c_{jh} + \lambda_{jh}\,\xi_j + \varepsilon_{jh}$

Weights (formative): $\xi_j = c_j + \sum_h w_{jh}\,x_{jh} + \delta_j$

Structural paths: $\xi_j = \sum_i \beta_{ij}\,\xi_i + \zeta_j$
■ The structural part of the model must be recursive
 ► Non-recursive paths are not allowed
 ► No causal loops
■ Each latent variable must be connected to at least one other LV (*)
■ Each LV must be measured by at least one indicator (second-order factors?)
■ An indicator can be associated with only one LV
■ The model cannot contain non-connected blocks (*)
PLSPM specification restrictions
(*) SmartPLS restrictions, not necessarily PLSPM restrictions
■ Sample sizes are small although representative (to be clarified)
■ The phenomenon under study is relatively new or changing, and the theoretical model or measures are not well formed
■ The data conditions relating to normal distribution, independence and/or sample size are not met
■ There is an epistemic need to model the relationship between LVs and indicators using reflective measures
■ The objective is prediction
When to use PLSPM instead of CBSEM? Chin & Newsted (1999, p. 337)
What does a 'small' sample size mean?

■ Number of indicators of the most complex formative LV (5)
■ Largest number of LVs that point to a dependent LV (3)

The minimum sample size is 10 times the larger of the two previous counts:

$5 \times 10 = 50$
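The 10-times rule above is a one-liner. The function name is my own; the counts (5 and 3) are those given on the slide.

```python
# The 10-times rule: the minimum sample size is 10 times the larger of
# (a) the number of indicators of the most complex formative block and
# (b) the largest number of LVs pointing at a dependent LV.
def minimum_sample_size(max_formative_indicators, max_incoming_paths):
    return 10 * max(max_formative_indicators, max_incoming_paths)

print(minimum_sample_size(5, 3))  # 5 x 10 = 50
```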
■ The previous criterion is the classical 'rule of thumb', but power analysis must also be considered
■ Test power:
 ► The probability of rejecting the null hypothesis when it is false
  ▬ α (Type I error): probability of rejecting the null hypothesis when it is true
  ▬ β (Type II error): probability of not rejecting the null hypothesis when it is false
  ▬ 1−β (power): probability of rejecting the null hypothesis when it is false
 ► In the social sciences, a minimum of 80% is recommended (Cohen, 1988)
■ Does the previous rule of thumb meet this criterion?
■ For 5 predictors (our model), α = .05 and a moderate effect size, the power of the test would be:

What does a 'small' sample size mean? Chin & Newsted (1999, p. 314); Hair, Anderson, Tatham & Black (1995, pp. 2, 10-13)
N 1−β
20 .16
40 .37
50 .47
60 .57
100 .84
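The table can be approximately reproduced with the noncentral F distribution. A sketch under stated assumptions: a medium effect size of f² = .15 (Cohen) and the noncentrality convention λ = f²·N used by G*Power; the function name is my own.

```python
from scipy.stats import f as fdist, ncf

# Power of the overall F test in a multiple regression with 5
# predictors, alpha = .05 and a medium effect size f2 = .15.
def regression_power(n, predictors=5, f2=0.15, alpha=0.05):
    df1, df2 = predictors, n - predictors - 1
    f_crit = fdist.ppf(1 - alpha, df1, df2)    # critical F under H0
    return 1 - ncf.cdf(f_crit, df1, df2, f2 * n)

for n in (20, 40, 50, 60, 100):
    print(n, round(regression_power(n), 2))
```

Only around N = 100 does the design reach the 80% benchmark, which is the point the table makes.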
■ Use a good power analysis program such as G*Power
■ It can be downloaded for free at:
 ► http://www.psycho.uni-duesseldorf.de/abteilungen/aap/gpower3/
■ Read the paper:
► Faul, F., Erdfelder, E., Lang, A.-G., & Buchner, A. (2007). G*Power 3: A flexible statistical power analysis program for the social, behavioral, and biomedical sciences. Behavior Research Methods, 39, 175-191.
Recommendation for performing power analysis
An annotated example Sanz, Ruiz, Aldás (2008)
■ Data screening
 ► Impute missing data with adequate software, since listwise deletion is the default in SmartPLS and only mean imputation is available in the current beta
■ Draw the model
■ Run the model
 ► Very few options
 ► Only the weighting scheme is configurable (marginal effect on the results)
■ Validate the measurement model
 ► LV reliability (Cronbach's alpha)
 ► LV reliability (composite reliability)
 ► Convergent validity (loading size and significance, AVE, cross-loadings)
 ► Discriminant validity (AVE vs. LV correlations)
■ Evaluate the structural model
 ► Explained variance of the dependent LVs (R²)
 ► Predictive relevance (Q² using blindfolding)
 ► Significance of the structural paths (bootstrapping)
■ Global fit
 ► No indices available
An annotated example Sanz, Ruiz, Aldás (2008)
■ Data screening
 ► Impute missing data with adequate software, since listwise deletion is the default in SmartPLS and only mean imputation is available in the current beta
An annotated example Sanz, Ruiz, Aldás (2008)
The missing-value coding scheme must be indicated to the software (−1 coding is used here).
An annotated example Sanz, Ruiz, Aldás (2008)
■ Draw the model
■ Run the model
 ► Very few options
 ► Only the weighting scheme is configurable (marginal effect on the results)
An annotated example Sanz, Ruiz, Aldás (2008)
■ Validating the measurement model
► Reliability (Cronbach's alpha)
 ▬ Benchmark: > .70 (Nunnally & Bernstein, 1994)
► Composite reliability (Werts, Linn & Jöreskog, 1974; Fornell & Larcker, 1981)
 ▬ Only applies to reflective constructs, not to formative ones (Chin, 1998)
 ▬ Benchmark: > 0.6 (Bagozzi & Yi, 1988)
 ▬ Calculated for each LV
 ▬ where $L_{ij}$ is the standardized loading of item $j$ on factor $i$, and $\mathrm{Var}(E_{ij})$ is the error variance, calculated as $\mathrm{Var}(E_{ij}) = 1 - L_{ij}^2$
An annotated example Sanz, Ruiz, Aldás (2008)
$CR_i = \dfrac{\left(\sum_j L_{ij}\right)^{2}}{\left(\sum_j L_{ij}\right)^{2} + \sum_j \mathrm{Var}(E_{ij})}$
■ Validating the measurement model
Outer loadings

Indicator  Construct  Loading
DEP1       DEPEND     0,5274
DEP2       DEPEND     0,3278
DEP3       DEPEND     0,4300
DEP4       DEPEND     0,9250
DEP5       DEPEND     0,6135
DEP6       DEPEND     0,5955
p.03.01    UTIPER     0,5714
p.03.02    UTIPER     0,7917
p.03.03    UTIPER     0,8203
p.03.04    UTIPER     0,8098
p.03.05    UTIPER     0,7766
p.03.06    UTIPER     0,7976
p.03.07    FACUSO     0,8198
p.03.08    FACUSO     0,4614
p.03.09    FACUSO     0,6897
p.03.10    FACUSO     0,7444
p.03.11    FACUSO     0,3946
p.03.12    FACUSO     0,7441
p.04.01    ACTITUD    0,6060
p.04.02    ACTITUD    0,6617
p.04.03    ACTITUD    0,2838
p.04.04    ACTITUD    0,7607
p.04.05    ACTITUD    0,8330
p.04.06    ACTITUD    0,8042
p.04.07    ACTITUD    0,7570
p.04.08    ACTITUD    0,5889
p.04.09    ACTITUD    0,4969
p.04.10    ACTITUD    0,6737
p.05       INTCOMP    1,0000
CR = .8938

$CR = \dfrac{(0.5714 + 0.7917 + 0.8203 + 0.8098 + 0.7766 + 0.7976)^2}{(0.5714 + 0.7917 + 0.8203 + 0.8098 + 0.7766 + 0.7976)^2 + (1 - 0.5714^2) + (1 - 0.7917^2) + (1 - 0.8203^2) + (1 - 0.8098^2) + (1 - 0.7766^2) + (1 - 0.7976^2)} = 0.8938$
An annotated example Sanz, Ruiz, Aldás (2008)
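The CR computation above can be checked directly. The loading values are the six UTIPER outer loadings from the SmartPLS output; the function name is my own.

```python
# Composite reliability for UTIPER from its six outer loadings,
# with Var(E) = 1 - L^2 for standardized indicators.
loadings = [0.5714, 0.7917, 0.8203, 0.8098, 0.7766, 0.7976]

def composite_reliability(L):
    num = sum(L) ** 2
    return num / (num + sum(1 - l ** 2 for l in L))

print(round(composite_reliability(loadings), 4))  # 0.8938
```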
■ Validating the measurement model
► SmartPLS automatically calculates CR and Cronbach’s alphas:
          AVE     Composite reliability  R²      Cronbach's alpha  Communality  Redundancy
ACTITUD   0,4425  0,8823                 0,4375  0,8518            0,4425       0,0708
DEPEND    n/a     n/a                    0,4047  n/a               0,3595       0,0412
FACUSO    0,4374  0,8148                 n/a     0,7616            0,4374       n/a
INTCOMP   1,0000  1,0000                 0,3210  1,0000            1,0000       0,1940
UTIPER    0,5869  0,8938                 0,2401  0,8557            0,5869       0,1402
An annotated example Sanz, Ruiz, Aldás (2008)
■ Validating the measurement model
► Reliability
 ▬ Average variance extracted criterion (Fornell & Larcker, 1981):
 ▬ Same notation as for CR
 ▬ Benchmark: > 0.5 (Fornell & Larcker, 1981)
 ▬ Only for reflective LVs, not formative ones (Chin, 1998)
$AVE_i = \dfrac{\sum_j L_{ij}^{2}}{\sum_j L_{ij}^{2} + \sum_j \mathrm{Var}(E_{ij})}$
An annotated example Sanz, Ruiz, Aldás (2008)
■ Validating the measurement model
Outer loadings (same table as shown earlier)
AVE=.5869
$AVE = \dfrac{0.5714^2 + 0.7917^2 + 0.8203^2 + 0.8098^2 + 0.7766^2 + 0.7976^2}{0.5714^2 + 0.7917^2 + 0.8203^2 + 0.8098^2 + 0.7766^2 + 0.7976^2 + (1 - 0.5714^2) + (1 - 0.7917^2) + (1 - 0.8203^2) + (1 - 0.8098^2) + (1 - 0.7766^2) + (1 - 0.7976^2)} = 0.5869$
An annotated example Sanz, Ruiz, Aldás (2008)
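The AVE computation can be checked the same way. With standardized indicators and Var(E) = 1 − L², the AVE reduces to the mean squared loading; the loading values are again the six UTIPER loadings.

```python
# AVE for UTIPER from its six outer loadings, with Var(E) = 1 - L^2.
loadings = [0.5714, 0.7917, 0.8203, 0.8098, 0.7766, 0.7976]

def ave(L):
    num = sum(l ** 2 for l in L)
    return num / (num + sum(1 - l ** 2 for l in L))

print(round(ave(loadings), 4))  # 0.5869
```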
■ Validating the measurement model
► SmartPLS automatically calculates AVE
(AVE and composite reliability table as shown earlier)
An annotated example Sanz, Ruiz, Aldás (2008)
■ Validating the measurement model
► Convergent validity (same criteria as in SEM)
 ▬ Benchmark: loadings > 0.7 (Carmines & Zeller, 1979)
 ▬ Some researchers believe this criterion is too strict when new scales are being developed (Barclay, Higgins & Thompson, 1995; Chin, 1998)
 ▬ Bagozzi & Yi (1988), for instance, propose > 0.6
 ▬ Notice! Formative constructs are evaluated in terms of their weights (Chin, 1998), focusing on their significance, not their size. Multicollinearity must also be tested.
 ▬ And what about significance? We will come back to it later.
An annotated example Sanz, Ruiz, Aldás (2008)
■ Validating the measurement model
An annotated example Sanz, Ruiz, Aldás (2008)
(Outer loadings table as shown earlier)
UTIPER1 should be dropped
Formative constructs: focus on WEIGHTS not on loadings
■ Validating the measurement model
► Convergent validity
 ▬ In SEM, the Lagrange multiplier test allowed us to detect items with significant loadings on more than one LV, which is a threat to convergent validity
 ▬ PLS offers the cross-loadings of each indicator on all the LVs. If an item loads higher on a factor other than the one it was designed to measure, the item should be deleted.
 ▬ Directly calculated by SmartPLS
 ▬ Only applies to reflective constructs
An annotated example Sanz, Ruiz, Aldás (2008)
■ Validating the measurement model
Cross-loadings

         ACTITUD  DEPEND   FACUSO   INTCOMP  UTIPER
DEP1     0,2843   0,5274   0,2783   0,2490   0,2735
DEP2     0,2736   0,3278   0,0460   0,0335   0,2142
DEP3     0,3092   0,4300   0,1283   0,0911   0,2691
DEP4     0,4620   0,9250   0,4160   0,3823   0,5852
DEP5     0,3779   0,6135   0,2156   0,1844   0,3926
DEP6     0,3635   0,5955   0,2098   0,2053   0,3636
p.03.01  0,4120   0,2931   0,2941   0,3414   0,5714
p.03.02  0,5003   0,5620   0,4143   0,4306   0,7917
p.03.03  0,4848   0,5152   0,4089   0,3886   0,8203
p.03.04  0,5004   0,4562   0,4371   0,4497   0,8098
p.03.05  0,4686   0,4622   0,3351   0,3521   0,7766
p.03.06  0,5295   0,4980   0,3454   0,4032   0,7976
p.03.07  0,3378   0,3596   0,8198   0,3756   0,3862
p.03.08  0,0874   0,0832   0,4614   0,2349   0,0687
p.03.09  0,2638   0,2390   0,6897   0,3333   0,3174
p.03.10  0,3360   0,3571   0,7444   0,3577   0,3976
p.03.11  0,0771   0,0938   0,3946   0,1633   0,0000
p.03.12  0,3617   0,3942   0,7441   0,3155   0,4214
p.04.01  0,6060   0,3289   0,1553   0,2226   0,3559
p.04.02  0,6617   0,4683   0,4597   0,3545   0,4331
p.04.03  0,2838   0,1295   0,0061   -0,0157  0,1166
p.04.04  0,7607   0,3674   0,2600   0,3728   0,4526
p.04.05  0,8330   0,4246   0,3923   0,3995   0,5403
p.04.06  0,8042   0,4296   0,4008   0,4752   0,5991
p.04.07  0,7570   0,3454   0,1740   0,3224   0,3754
p.04.08  0,5889   0,2390   0,2127   0,3278   0,4138
p.04.09  0,4969   0,2470   0,0919   0,2122   0,3280
p.04.10  0,6737   0,3567   0,3685   0,3572   0,3683
p.05     0,4981   0,4096   0,4597   1,0000   0,5168
An annotated example Sanz, Ruiz, Aldás (2008)
■ Validating the measurement model
► Discriminant validity
 ▬ Average variance extracted criterion (Fornell & Larcker, 1981)
 ▬ An LV should share more variance with its own indicators than with other LVs
 ▬ The squared correlation between two LVs must be compared with the AVE of each of the factors
 ▬ Discriminant validity holds if:

$AVE_i > \rho_{ij}^2 \quad \text{and} \quad AVE_j > \rho_{ij}^2$

 ▬ Only applies to reflective constructs, not formative ones (Chin, 1998)
An annotated example Sanz, Ruiz, Aldás (2008)
■ Validating the measurement model
► SmartPLS offers all the information we need: AVE for each latent variable and the correlations among factors
► As a matter of example, for the two critical factors (higher correlation):
An annotated example Sanz, Ruiz, Aldás (2008)
Correlations among latent variables:

          ACTITUD  DEPEND  FACUSO  INTCOMP  UTIPER
ACTITUD   1,0000
DEPEND    0,5224   1,0000
FACUSO    0,4246   0,4446  1,0000
INTCOMP   0,4981   0,4096  0,4597  1,0000
UTIPER    0,6315   0,6145  0,4900  0,5168   1,0000

AVE: ACTITUD 0,4425; DEPEND 0,0000 (formative); FACUSO 0,4374; INTCOMP 1,0000; UTIPER 0,5869
$\rho_{utiper,actitud}^2 = 0.6315^2 = 0.3988$
$AVE_{utiper} = 0.5869 > 0.3988$
$AVE_{actitud} = 0.4425 > 0.3988$
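The Fornell-Larcker check for the two most highly correlated reflective LVs can be scripted. The AVE and correlation values are those from the SmartPLS output; the helper function is my own.

```python
# Fornell-Larcker discriminant validity check: each LV's AVE must
# exceed its squared correlation with the other LV.
ave = {"UTIPER": 0.5869, "ACTITUD": 0.4425}
corr = 0.6315  # correlation between UTIPER and ACTITUD

def discriminant_ok(ave_i, ave_j, r):
    return ave_i > r ** 2 and ave_j > r ** 2

print(round(corr ** 2, 4))                                   # 0.3988
print(discriminant_ok(ave["UTIPER"], ave["ACTITUD"], corr))  # True
```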
■ Evaluating the structural model
■ Explained variance of the dependent LVs (R²)
 ► Based on each regression's R²
 ► R² values are interpreted as in any multiple regression: the amount of variance of the dependent LV explained by the set of independent LVs
 ► R² should never be < 0.1 (Falk & Miller, 1992)
 ► SmartPLS provides R² both in the output and in the charts
 ► My opinion: Falk & Miller (1992) is an arbitrary criterion; power analysis should be added
An annotated example Sanz, Ruiz, Aldás (2008)
(AVE and composite reliability table as shown earlier)
R² (shown on the model chart)
An annotated example Sanz, Ruiz, Aldás (2008)
■ Evaluating the structural model
■ Predictive relevance (Q² using blindfolding)
 ► The size of R² must be complemented (if not replaced) by the resampling proposal of Stone (1974) and Geisser (1975)
 ► Blindfolding implies deleting part of the data for a dependent LV and estimating it again using the structural part of the model. The process is repeated until every data point has been deleted and estimated.
 ► An omission distance D has to be decided. It determines the share of data points deleted at each step, and it must not be a perfect divisor of the sample size. Wold (1982) recommends values between 5 and 10.
 ► Assume case n of variable k has been omitted. We can estimate it by regressing that variable on the rest of the LVs according to the model structure. Since we know its true value, we can compute the difference. Summing those squared differences over all omitted data points of variable k we get:
$E_k = \sum_{n=1}^{N} \left( y_{kn} - \hat{y}_{kn} \right)^2$
An annotated example Sanz, Ruiz, Aldás (2008)
■ Evaluating the structural model
■ Predictive relevance (Q² using blindfolding)
 ► Now assume that, instead of estimating the omitted value using the structural part of the model, we estimate it ignoring the model, using the mean of variable k. Once again we calculate the difference and repeat this for all the omitted data points:
 ► The Q² statistic for each dependent LV can then be calculated as follows:
 ► Q² measures how well the observed values can be replicated using the estimated parameters:
  ▬ Q² > 0 implies the model has predictive relevance for variable k
  ▬ Q² < 0 implies the model lacks predictive relevance for variable k
$O_k = \sum_{n=1}^{N} \left( y_{kn} - \bar{y}_{k} \right)^2$

$Q_k^2 = 1 - \dfrac{E_k}{O_k}$
An annotated example Sanz, Ruiz, Aldás (2008)
■ Evaluating the structural model ■ Predictive relevance (Q2 using blindfolding)
► SmartPLS directly provides Q2 values for all the dependent LV
          SSO        SSE        Q² = 1 − SSE/SSO
ACTITUD   4650,0000  3805,8981  0,1815
DEPEND    2790,0000  2414,2583  0,1347
INTCOMP   465,0000   326,3057   0,2983
UTIPER    2790,0000  2414,4798  0,1346
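The Q² column can be recomputed from the SSO and SSE values SmartPLS reports:

```python
# Q2 = 1 - SSE/SSO for each dependent LV, from the blindfolding output.
sso = {"ACTITUD": 4650.0, "DEPEND": 2790.0,
       "INTCOMP": 465.0, "UTIPER": 2790.0}
sse = {"ACTITUD": 3805.8981, "DEPEND": 2414.2583,
       "INTCOMP": 326.3057, "UTIPER": 2414.4798}

q2 = {lv: 1 - sse[lv] / sso[lv] for lv in sso}
for lv, q in q2.items():
    print(lv, round(q, 4))
```

All four values are positive, so the model has predictive relevance for every dependent LV.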
An annotated example Sanz, Ruiz, Aldás (2008)
■ Evaluating the structural model
 ► Analysing the significance of the structural paths (bootstrapping)
■ Evaluating the measurement model
 ► Convergent validity: testing the significance of the loadings (bootstrapping)
■ Bootstrapping
 ► A resampling procedure in which N random samples are generated from the original sample with replacement
 ► The average values of the parameters estimated in each subsample are calculated and compared with the parameters obtained from the original sample
 ► Large differences indicate non-significant parameters
 ► The number of subsamples N must be high (around N = 500)
 ► Each subsample must have the same size as the original sample
An annotated example Sanz, Ruiz, Aldás (2008)
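The bootstrap-t logic described above can be sketched generically. A minimal sketch: a simple correlation stands in for a PLS path coefficient, and the simulated data and variable names are my own.

```python
import numpy as np

# Bootstrap t statistic: resample cases with replacement (same size as
# the original sample), re-estimate the parameter, and divide the
# original estimate by the bootstrap standard error.
rng = np.random.default_rng(1)
n = 100
x = rng.standard_normal(n)
y = 0.5 * x + rng.standard_normal(n)

def estimate(xs, ys):
    return np.corrcoef(xs, ys)[0, 1]   # stand-in for a path coefficient

theta = estimate(x, y)
boot = []
for _ in range(500):                    # around N = 500 subsamples
    idx = rng.integers(0, n, size=n)    # same size as the original sample
    boot.append(estimate(x[idx], y[idx]))

t_stat = theta / np.std(boot, ddof=1)
print(round(t_stat, 2))
```

A |t| above roughly 1.96 is read as significance at the 5% level, which is how the T Statistics columns in the following tables are interpreted.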
■ Bootstrapping

(Figure: the original sample, cases 1 to 20, and one of the N bootstrap subsamples drawn from it with replacement; some cases appear several times, others not at all)
An annotated example Sanz, Ruiz, Aldás (2008)
Original sample One of the N subsamples
■ Bootstrapping
Outer loadings

Indicator          Original Sample  Sample Mean  Standard Error  T Statistics
DEP1 -> DEPEND 0,5274 0,5192 0,0727 7,2547
DEP2 -> DEPEND 0,3278 0,3213 0,0762 4,3029
DEP3 -> DEPEND 0,43 0,423 0,0777 5,5345
DEP4 -> DEPEND 0,925 0,9149 0,0266 34,7133
DEP5 -> DEPEND 0,6135 0,6033 0,0673 9,121
DEP6 -> DEPEND 0,5955 0,5856 0,0616 9,6686
p.03.01 <- UTIPER 0,5714 0,5668 0,0449 12,7149
p.03.02 <- UTIPER 0,7917 0,7911 0,0205 38,5845
p.03.03 <- UTIPER 0,8203 0,8193 0,019 43,2043
p.03.04 <- UTIPER 0,8098 0,8079 0,0209 38,8366
p.03.05 <- UTIPER 0,7766 0,7727 0,0256 30,3694
p.03.06 <- UTIPER 0,7976 0,7945 0,0223 35,7663
p.03.07 <- FACUSO 0,8198 0,8163 0,0204 40,2304
p.03.08 <- FACUSO 0,4614 0,4544 0,0808 5,7132
p.03.09 <- FACUSO 0,6897 0,685 0,0432 15,9552
p.03.10 <- FACUSO 0,7444 0,7425 0,03 24,8401
p.03.11 <- FACUSO 0,3946 0,3861 0,0907 4,3502
p.03.12 <- FACUSO 0,7441 0,7426 0,0286 25,9782
p.04.01 <- ACTITUD 0,606 0,6014 0,0447 13,5555
p.04.02 <- ACTITUD 0,6617 0,6619 0,0389 17,0137
p.04.03 <- ACTITUD 0,2838 0,2813 0,0739 3,8391
p.04.04 <- ACTITUD 0,7607 0,7576 0,0289 26,3512
p.04.05 <- ACTITUD 0,833 0,8304 0,017 49,0118
p.04.06 <- ACTITUD 0,8042 0,8032 0,0205 39,2634
p.04.07 <- ACTITUD 0,757 0,7543 0,0299 25,2988
p.04.08 <- ACTITUD 0,5889 0,5857 0,0417 14,1064
p.04.09 <- ACTITUD 0,4969 0,4916 0,0464 10,7014
p.04.10 <- ACTITUD 0,6737 0,6717 0,0384 17,5488
p.05 <- INTCOMP 1 1 0 0
An annotated example Sanz, Ruiz, Aldás (2008)
■ Bootstrapping
Outer weights

Indicator          Original Sample  Sample Mean  Standard Error  T Statistics
DEP1 -> DEPEND 0,5274 0,5192 0,0727 7,2547
DEP2 -> DEPEND 0,3278 0,3213 0,0762 4,3029
DEP3 -> DEPEND 0,43 0,423 0,0777 5,5345
DEP4 -> DEPEND 0,925 0,9149 0,0266 34,7133
DEP5 -> DEPEND 0,6135 0,6033 0,0673 9,121
DEP6 -> DEPEND 0,5955 0,5856 0,0616 9,6686
p.03.01 <- UTIPER 0,5714 0,5668 0,0449 12,7149
p.03.02 <- UTIPER 0,7917 0,7911 0,0205 38,5845
p.03.03 <- UTIPER 0,8203 0,8193 0,019 43,2043
p.03.04 <- UTIPER 0,8098 0,8079 0,0209 38,8366
p.03.05 <- UTIPER 0,7766 0,7727 0,0256 30,3694
p.03.06 <- UTIPER 0,7976 0,7945 0,0223 35,7663
p.03.07 <- FACUSO 0,8198 0,8163 0,0204 40,2304
p.03.08 <- FACUSO 0,4614 0,4544 0,0808 5,7132
p.03.09 <- FACUSO 0,6897 0,685 0,0432 15,9552
p.03.10 <- FACUSO 0,7444 0,7425 0,03 24,8401
p.03.11 <- FACUSO 0,3946 0,3861 0,0907 4,3502
p.03.12 <- FACUSO 0,7441 0,7426 0,0286 25,9782
p.04.01 <- ACTITUD 0,606 0,6014 0,0447 13,5555
p.04.02 <- ACTITUD 0,6617 0,6619 0,0389 17,0137
p.04.03 <- ACTITUD 0,2838 0,2813 0,0739 3,8391
p.04.04 <- ACTITUD 0,7607 0,7576 0,0289 26,3512
p.04.05 <- ACTITUD 0,833 0,8304 0,017 49,0118
p.04.06 <- ACTITUD 0,8042 0,8032 0,0205 39,2634
p.04.07 <- ACTITUD 0,757 0,7543 0,0299 25,2988
p.04.08 <- ACTITUD 0,5889 0,5857 0,0417 14,1064
p.04.09 <- ACTITUD 0,4969 0,4916 0,0464 10,7014
p.04.10 <- ACTITUD 0,6737 0,6717 0,0384 17,5488
p.05 <- INTCOMP 1 1 0 0
An annotated example Sanz, Ruiz, Aldás (2008)
■ Bootstrapping
Path coefficients

                     Original Sample (O)  Sample Mean (M)  Standard Error (STERR)  T Statistics (|O/STERR|)
ACTITUD -> INTCOMP   0,2655               0,2673           0,0591                  4,4887
DEPEND -> ACTITUD    0,1891               0,1944           0,0502                  3,7677
DEPEND -> INTCOMP    0,0906               0,0968           0,0593                  1,5277
FACUSO -> ACTITUD    0,1159               0,1214           0,0479                  2,4185
FACUSO -> DEPEND     0,1888               0,1918           0,0524                  3,5989
FACUSO -> UTIPER     0,4900               0,4961           0,0418                  11,7114
UTIPER -> ACTITUD    0,4585               0,4513           0,0537                  8,5314
UTIPER -> DEPEND     0,5220               0,5234           0,0440                  11,8668
UTIPER -> INTCOMP    0,2935               0,2834           0,0583                  5,0363
An annotated example Sanz, Ruiz, Aldás (2008)
■ Measurement model evaluation
 ► Indicator 1 of usefulness is deleted (loading < .70)
 ► Indicators 2 and 5 of ease of use are deleted (loading < .70)
 ► No reliability problems
 ► Regarding convergent validity, some problems arise with the AVE, but we wait to see what happens after deleting the above-mentioned items
 ► No discriminant validity problems
■ Structural model evaluation
 ► We must not do anything until the measurement model is considered definitive
■ Once the measurement model has been modified, the estimation is performed again
An annotated example Sanz, Ruiz, Aldás (2008)
An annotated example Sanz, Ruiz, Aldás (2008)
Convergent validity: outer loadings

Indicator  Construct  Loading
DEP1       DEPEND     0,5275
DEP3       DEPEND     0,4170
DEP4       DEPEND     0,9233
DEP5       DEPEND     0,6106
DEP6       DEPEND     0,5986
p.03.02    UTIPER     0,7947
p.03.03    UTIPER     0,8339
p.03.04    UTIPER     0,8122
p.03.05    UTIPER     0,7910
p.03.06    UTIPER     0,8119
p.03.07    FACUSO     0,8146
p.03.09    FACUSO     0,6910
p.03.10    FACUSO     0,7508
p.03.12    FACUSO     0,7547
p.04.04    ACTITUD    0,7914
p.04.05    ACTITUD    0,8645
p.04.06    ACTITUD    0,8207
p.04.07    ACTITUD    0,7614
p.04.10    ACTITUD    0,7110
p.05       INTCOMP    1,0000
An annotated example Sanz, Ruiz, Aldás (2008)
          AVE     Composite reliability  R²      Cronbach's alpha  Communality  Redundancy
ACTITUD   0,6265  0,8931                 0,3874  0,8506            0,6265       0,0834
DEPEND    n/a     n/a                    0,4112  n/a               0,4072       0,0539
FACUSO    0,5686  0,8401                 n/a     0,7480            0,5686       n/a
INTCOMP   1,0000  1,0000                 0,3193  1,0000            1,0000       0,1995
UTIPER    0,6543  0,9044                 0,2512  0,8679            0,6543       0,1633
Reliability: CR, Cronbach’s alpha, AVE
An annotated example Sanz, Ruiz, Aldás (2008)
         ACTITUD  DEPEND  FACUSO  INTCOMP  UTIPER
DEP1     0,2784   0,5275  0,2772  0,2490   0,2676
DEP3     0,2810   0,4170  0,1448  0,0911   0,2670
DEP4     0,4289   0,9233  0,4255  0,3823   0,5889
DEP5     0,3501   0,6106  0,2365  0,1844   0,3986
DEP6     0,3444   0,5986  0,2264  0,2053   0,3724
p.03.02  0,4843   0,5633  0,4273  0,4306   0,7947
p.03.03  0,4630   0,5155  0,4253  0,3886   0,8339
p.03.04  0,4795   0,4548  0,4537  0,4497   0,8122
p.03.05  0,4553   0,4593  0,3552  0,3521   0,7910
p.03.06  0,5066   0,4981  0,3582  0,4032   0,8119
p.03.07  0,3237   0,3624  0,8146  0,3756   0,3733
p.03.09  0,2492   0,2397  0,6910  0,3333   0,3064
p.03.10  0,3246   0,3599  0,7508  0,3577   0,3966
p.03.12  0,3640   0,3946  0,7547  0,3155   0,4182
p.04.04  0,7914   0,3652  0,2704  0,3728   0,4330
p.04.05  0,8645   0,4226  0,4021  0,3995   0,5312
p.04.06  0,8207   0,4288  0,4080  0,4752   0,5901
p.04.07  0,7614   0,3414  0,1866  0,3224   0,3648
p.04.10  0,7110   0,3564  0,3663  0,3572   0,3629
p.05     0,4942   0,4126  0,4566  1,0000   0,5023
Convergent validity: crossloadings
An annotated example Sanz, Ruiz, Aldás (2008)
(Reliability table as shown earlier)
          ACTITUD  FACUSO  INTCOMP  UTIPER
ACTITUD   1,0000
FACUSO    0,4242   1,0000
INTCOMP   0,4942   0,4566  1,0000
UTIPER    0,5911   0,5012  0,5023   1,0000
LV        √AVE
ACTITUD   .79
FACUSO    .75
INTCOMP   N/A
UTIPER    .81
Discriminant validity AVE vs. correlations
An annotated example Sanz, Ruiz, Aldás (2008)
Structural model: R²
(Reliability table as shown earlier)
Source: Hair, Black, Babin, Anderson & Tatham (2006, p. 195)
An annotated example Sanz, Ruiz, Aldás (2008)
Structural model: Q² (predictive relevance)
          SSO        SSE        Q² = 1 − SSE/SSO
ACTITUD   2325,0000  1799,2618  0,2261
DEPEND    2325,0000  1947,7690  0,1622
INTCOMP   465,0000   323,8041   0,3036
UTIPER    2325,0000  1981,4057  0,1478
An annotated example Sanz, Ruiz, Aldás (2008)
■ Bootstrapping
Outer loadings
An annotated example Sanz, Ruiz, Aldás (2008)
■ Bootstrapping
An annotated example Sanz, Ruiz, Aldás (2008)
Outer weights
■ Bootstrapping
An annotated example Sanz, Ruiz, Aldás (2008)
Path coefficients
■ Results in an academic paper
An annotated example Sanz, Ruiz, Aldás (2008)
Factor                   Indicator  Loading   Weight     t value  CA     CR     AVE
F1. Usefulness           UTPER2     0,795***             42,02    0,868  0,904  0,654
                         UTPER3     0,834***             53,33
                         UTPER4     0,812***             44,17
                         UTPER5     0,791***             31,73
                         UTPER6     0,812***             44,49
F2. Ease of use          FACUSO1    0,815***             41,27    0,748  0,840  0,569
                         FACUSO3    0,691***             17,09
                         FACUSO4    0,751***             27,26
                         FACUSO6    0,755***             29,95
F3. Internet dependence  DIM1                 0,286***   4,57     N/A    N/A    N/A
                         DIM3                 -0,122*    1,64
                         DIM4                 0,732***   13,71
                         DIM5                 0,168**    1,96
                         DIM6                 0,203**    2,47
F4. Attitude             ACT4       0,791***             32,23    0,851  0,893  0,627
                         ACT5       0,864***             62,43
                         ACT6       0,821***             44,86
                         ACT7       0,861***             27,63
                         ACT10      0,711***             20,30
Measurement model: reliability and convergent validity
***p < .01; **p < .05; *p < .10; N/A = not applicable
     F1     F2     F3     F4
F1   .81
F2   0,501  .75
F3   0,618  0,459  N/A
F4   0,591  0,424  0,488  .79
Measurement model: discriminant validity
**p < .01; *p < .05; N/A = not applicable
Below diagonal: correlations among latent variables
Diagonal: square root of AVE
■ Results in an academic paper
An annotated example Sanz, Ruiz, Aldás (2008)
Hypothesis                                    Standardized beta  Bootstrap t value
H1: Ease of use → Usefulness                  0,501**            13,38
H2: Ease of use → Attitude                    0,138**            3,14
H3: Usefulness → Attitude                     0,419**            8,34
H4: Usefulness → Intention to buy             0,268**            4,49
H5: Attitude → Intention to buy               0,283**            5,05
H6: Usefulness → Internet dependency          0,518**            13,54
H7: Ease of use → Internet dependency         0,199**            4,04
H8: Internet dependency → Attitude            0,166**            3,48
H9: Internet dependency → Intention to buy    0,110*             2,08
Hypotheses testing
R² (Usefulness) = 0,251; R² (Attitude) = 0,387; R² (Dependency) = 0,411; R² (Intention) = 0,319; Q² (Usefulness) = 0,148; Q² (Attitude) = 0,226; Q² (Dependency) = 0,162; Q² (Intention) = 0,304; **p < .01; *p < .05
■ Results in an academic paper
An annotated example Sanz, Ruiz, Aldás (2008)
Partial Least Squares Path Modeling (PLSPM) Exercise
■ Model ► Read paper from Fornell, Johnson, Anderson, Cha & Bryant (1996)
Determinants and consequences of client satisfaction Fornell, Johnson, Anderson, Cha & Bryant (1996)
■ Questions ► Use data file ecsi.csv ► Draw model of next figure ► Evaluate if sample size is big enough to apply PLSPM ► Evaluate measurement model reliability and validity and estimate
parameters significance using bootstrapping ► Evaluate the structural model ► Create a table to publish the results in an academic paper
Determinants and consequences of client satisfaction Fornell, Johnson, Anderson, Cha & Bryant (1996)
Determinants and consequences of client satisfaction Fornell, Johnson, Anderson, Cha & Bryant (1996)