Journal: Communications of the Association for Information Systems
Print ISSN: 1529-3181
Year: 2020
Volume: 47
Issue: 1
Pages: 31-95
DOI: 10.17705/1CAIS.04736
Publisher: Association for Information Systems
Abstract: Empirical research in information systems relies heavily on developing and validating survey instruments. However, researchers' efforts to evaluate the content validity of survey scales are often inconsistent, incomplete, or unreported. This paper defines and describes the most significant facets of content validity and illustrates the mechanisms through which multi-item psychometric scales capture a latent construct's content. We discuss competing methods and propose new ones, assembling a comprehensive set of metrics and procedures for evaluating content validity. The resulting recommendations for researchers evaluating content validity emphasize an iterative pre-study process (wash, rinse, and repeat until clean) to objectively establish "fit for purpose" when developing and adapting survey scales. A sample pre-study demonstrates suitable methods for building confidence that scales reliably capture the theoretical essence of latent constructs. We demonstrate the efficacy of these methods using a randomized field experiment.