Discriminant validity
In psychology, discriminant validity or divergent validity tests whether concepts or measurements that are not supposed to be related are, in fact, unrelated.[1]
Campbell and Fiske (1959) introduced the concept of discriminant validity within their discussion on evaluating test validity. They stressed the importance of using both discriminant and convergent validation techniques when assessing new tests. A successful evaluation of discriminant validity shows that a test of a concept is not highly correlated with other tests designed to measure theoretically different concepts.
In showing that two scales do not correlate, it is necessary to correct for attenuation in the correlation due to measurement error. It is possible to calculate the extent to which the two scales overlap by using the following formula, where r_xy is the correlation between x and y, r_xx is the reliability of x, and r_yy is the reliability of y:

r_xy′ = r_xy / √(r_xx · r_yy)
Although there is no standard value for discriminant validity, a result less than .85 tells us that discriminant validity likely exists between the two scales. A result greater than .85, however, tells us that the two constructs overlap greatly and they are likely measuring the same thing. Therefore, we cannot claim discriminant validity between them.
Consider researchers developing a new scale designed to measure Narcissism. They may want to show discriminant validity with a scale measuring Self-esteem. Narcissism and Self-esteem are theoretically different concepts, and therefore it is important that the researchers show that their new scale measures Narcissism and not simply Self-esteem.
First, we can calculate the Average Inter-Item Correlations within and between the two scales:
- Narcissism — Narcissism: 0.47
- Narcissism — Self-esteem: 0.30
- Self-esteem — Self-esteem: 0.52
We then use the correction for attenuation formula, treating the within-scale averages as the reliabilities:

0.30 / √(0.47 × 0.52) = 0.607
Since 0.607 is less than 0.85, we can conclude that discriminant validity exists between the scale measuring narcissism and the scale measuring self-esteem. The two scales measure theoretically different constructs.
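The worked example above can be reproduced in a few lines. This is a minimal sketch using the example's average inter-item correlations as stand-ins for the reliabilities; the variable names are illustrative, not part of any established API:

```python
import math

# Average inter-item correlations from the example above
r_xy = 0.30  # Narcissism -- Self-esteem (between scales)
r_xx = 0.47  # Narcissism -- Narcissism (within scale)
r_yy = 0.52  # Self-esteem -- Self-esteem (within scale)

# Correction for attenuation: disattenuated correlation between the scales
corrected = r_xy / math.sqrt(r_xx * r_yy)
print(round(corrected, 3))  # 0.607

# Compare against the conventional .85 threshold
print(corrected < 0.85)  # True: discriminant validity likely exists
```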
Recommended approaches to test for discriminant validity at the construct level are AVE-SE comparisons (Fornell & Larcker, 1981; note that the measurement error-adjusted inter-construct correlations derived from the CFA model should be used, rather than raw correlations derived from the data)[2] and the assessment of the HTMT ratio (Henseler et al., 2014).[3] Simulation tests reveal that the former performs poorly for variance-based structural equation models (SEM), e.g. PLS, but well for covariance-based SEM, e.g. Amos, while the latter performs well for both types of SEM.[3][4] Voorhees et al. (2015) recommend combining both methods for covariance-based SEM, with an HTMT cutoff of 0.85.[4] A recommended approach to test for discriminant validity at the item level is exploratory factor analysis (EFA).
See also
- Construct validity
- Concurrent validity
- Validity (statistics)
- Convergent validity
- Multitrait-multimethod matrix
References
- http://www.experiment-resources.com/convergent-validity.html
- Fornell, C., & Larcker, D. F. (1981). Evaluating structural equation models with unobservable variables and measurement error. Journal of Marketing Research, 18, February 1981, 39–50.
- Henseler, J., Ringle, C. M., & Sarstedt, M. (2014). A new criterion for assessing discriminant validity in variance-based structural equation modeling. Journal of the Academy of Marketing Science, 43(1), 115–135.
- Voorhees, C. M., Brady, M. K., Calantone, R., & Ramirez, E. (2015). Discriminant validity testing in marketing: an analysis, causes for concern, and proposed remedies. Journal of the Academy of Marketing Science, 1–16.
- Campbell, D. T., & Fiske, D. W. (1959). Convergent and discriminant validation by the multitrait-multimethod matrix. Psychological Bulletin, 56, 81-105.
- John, O. P., & Benet-Martinez, V. (2000). Measurement: Reliability, construct validation, and scale construction. In H. T. Reis & C. M. Judd (Eds.), Handbook of research methods in social psychology (pp. 339–369). New York: Cambridge University Press.