
Tuesday, March 31, 2009

Assessing Reliability and Validity of Constructs and Indicators

One of the most important advantages of latent-variable analyses is the opportunity they provide to assess the reliability and validity of the study’s variables. In general, reliability refers to consistency of measurement, while validity refers to the extent to which an instrument measures what it is intended to measure. For example, a survey is reliable if it yields essentially the same set of responses from a group of respondents upon repeated administration. Similarly, if a scale developed to measure marketing effectiveness produces scores that in fact reflect respondents’ underlying levels of marketing performance, then the scale is valid. Both reliability and validity can be assessed in a number of different ways.

Indicator reliability. The reliability of an indicator (observed variable) is defined as the square of the correlation (the squared multiple correlation, or SMC) between a latent factor and that indicator. For instance, in Table 1 the standardized loading for the path between Sympathique and F1 is 0.970, and the corresponding reliability is 0.939. Across the range of indicator reliabilities, many are relatively high (0.6 and above); however, several are quite low, such as Effacee with an indicator reliability of 0.313.
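With standardized loadings, an indicator's reliability is simply the squared loading. A minimal sketch: the Sympathique loading below is the value quoted from Table 1, while the Effacee loading is a hypothetical placeholder (only its squared value, 0.313, is reported in the text), and small rounding differences from the tabled SMCs are expected.

```python
# Indicator reliability under standardized loadings: reliability = loading ** 2.
loadings = {
    "Sympathique": 0.970,  # from Table 1
    "Effacee": 0.560,      # hypothetical; only the SMC (0.313) is reported
}
for name, loading in loadings.items():
    print(f"{name}: reliability = {loading ** 2:.3f}")
```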

Composite reliability was computed for each latent factor included in the model. This index is similar to coefficient alpha and reflects the internal consistency of the indicators measuring a particular factor (Fornell & Larcker, 1981). Both the composite reliability and variance extracted estimates are shown in Table 1. Fornell and Larcker (1981) recommend a minimum composite reliability of .60. An examination of the composite reliabilities revealed that all factors meet this minimum acceptable level.
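Composite reliability can be computed directly from the standardized loadings. A sketch of the Fornell and Larcker (1981) formula, using hypothetical loadings rather than the study's Table 1 values:

```python
def composite_reliability(loadings):
    """Fornell & Larcker (1981): (sum of loadings)^2 divided by
    (sum of loadings)^2 plus the sum of indicator error variances.
    Assumes standardized loadings, so each error variance is 1 - loading^2."""
    loading_sum = sum(loadings)
    error_sum = sum(1 - l ** 2 for l in loadings)
    return loading_sum ** 2 / (loading_sum ** 2 + error_sum)

# Hypothetical standardized loadings for one factor (not the study's data)
cr = composite_reliability([0.97, 0.85, 0.78])
print(cr >= 0.60)  # True: exceeds the recommended minimum
```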

The variance extracted estimate assesses the amount of variance explained by an underlying factor relative to the amount of variance due to measurement error. For instance, the variance extracted estimate for F1 was 0.838, meaning that 83.8% of the indicator variance is explained by the F1 construct and 16.2% is due to measurement error. Fornell and Larcker (1981) suggest that constructs should exhibit estimates of .50 or larger; estimates less than .50 indicate that variance due to measurement error is larger than the variance captured by the factor. The variance extracted estimates all meet this minimum threshold, so the validity of the latent constructs is acceptable. It should also be noted that Hatcher (1994) cautions that the variance extracted test is conservative; reliabilities can be acceptable even when variance extracted estimates fall below .50.
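The variance extracted estimate follows the same pattern as composite reliability. A sketch, again with hypothetical loadings rather than the study's data:

```python
def variance_extracted(loadings):
    """Average variance extracted (Fornell & Larcker, 1981): the sum of squared
    standardized loadings divided by that sum plus the sum of error variances.
    With standardized indicators this reduces to the mean squared loading."""
    squared = sum(l ** 2 for l in loadings)
    error = sum(1 - l ** 2 for l in loadings)
    return squared / (squared + error)

# Hypothetical standardized loadings (not the study's data)
ave = variance_extracted([0.97, 0.85, 0.78])
print(ave > 0.50)  # True: more variance is captured by the factor than by error
```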

Convergent validity is present when different instruments are used to measure the same construct and scores from these different instruments are strongly correlated. In contrast, discriminant validity is present when different instruments are used to measure different constructs and the measures of these different constructs are weakly correlated.

In the present study, convergent validity was assessed by reviewing the t-tests for the factor loadings. If all factor loadings for the indicators are greater than twice their standard errors, the parameter estimates demonstrate convergent validity; that all t-tests were significant shows that all indicators were effectively measuring the same construct (Anderson & Gerbing, 1988). Consider the convergent validity of the ten indicators that measure F1. The t-values for these indicators range from -14.480 to 18.510. These results support the convergent validity of Sympathique, Desagreable, Amicale, Souple, Severe, Autoritaire, Compatissante au coeur tendre, Spontanée, Distante, and Attentive aux autres as measures of F1.
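The "greater than twice the standard error" rule amounts to checking that |estimate / SE| exceeds roughly 2. A minimal sketch; the indicator names are from the study, but the loadings and standard errors below are hypothetical:

```python
def loading_is_significant(estimate, standard_error, critical=2.0):
    # An estimate more than ~2 standard errors from zero is significant at
    # roughly the .05 level (the |t| > 2 rule of thumb for large samples).
    return abs(estimate / standard_error) > critical

# Hypothetical loadings and standard errors (not the study's output)
paths = [("Sympathique", 1.00, 0.054), ("Distante", -0.78, 0.054)]
for name, estimate, se in paths:
    print(name, loading_is_significant(estimate, se))
```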

Discriminant validity was assessed using the variance extracted test: the variance extracted estimates for two factors are compared with the square of the correlation between those factors, and discriminant validity is demonstrated when both variance extracted estimates are greater than the squared correlation. In the present study, the correlation between factors F1 and F2 was 0.154, and the squared correlation was 0.024. The correlations and squared correlations are shown in Table 2. The variance extracted estimate was 0.838 for F1 and 0.666 for F2. Because both variance extracted estimates are greater than the squared interfactor correlation, the test supports the discriminant validity of these two factors. Examination of the remaining variance extracted estimates and squared correlations supported discriminant validity throughout the model.
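Using the values reported above for F1 and F2, the variance extracted (Fornell-Larcker) test can be reproduced in a few lines:

```python
# Values reported in the text (Tables 1 and 2)
ave_f1, ave_f2 = 0.838, 0.666
interfactor_r = 0.154
squared_r = interfactor_r ** 2  # ~0.024

# Fornell-Larcker criterion: both AVEs must exceed the squared correlation
supports_discriminant_validity = ave_f1 > squared_r and ave_f2 > squared_r
print(supports_discriminant_validity)  # True
```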


References

Anderson, J.C. & Gerbing, D.W. (1988). Structural equation modeling in practice: A
review and recommended two-step approach. Psychological Bulletin, 103, 411-423.

Bollen, K.A. (1989). Structural equations with latent variables. New York: John Wiley
& Sons.

Fornell, C. & Larcker, D.F. (1981). Evaluating structural equation models with
unobservable variables and measurement error. Journal of Marketing Research, 18,
39-50.

Hatcher, L. (1994). A step-by-step approach to using SAS for factor analysis and
structural equation modeling. Cary, NC: SAS Institute Inc.

Jöreskog, K.G. & Sörbom, D. (1989). LISREL 7: A guide to the program and
application, 2nd edition. Chicago: SPSS Inc.