
Survey reliability

Surveys are constructed from carefully chosen questions to gather information from some group of people. Quite often it's not enough to ask a single question: surveys usually consist of multiple related questions that together try to measure a complex human behavior or characteristic, known as a construct. The responses to these individual questions can be combined to form a score or scale. A survey is reliable if it gives similar results when taken again by a similar group of people.

Cronbach's alpha is a way to measure the internal consistency reliability of the individual questions (or test items):

$$\alpha = \frac{N\bar{c}}{\bar{v} + (N - 1)\bar{c}}$$

where $N$ is the number of items, $\bar{c}$ is the average of the inter-item covariances, and $\bar{v}$ is the average item variance.
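As a sketch of how this formula translates to code, here is a small Python/NumPy function (the function name `cronbach_alpha` and the array layout are my own choices, not from the post, whose own check below is in R):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) array of scores."""
    n = items.shape[1]
    cov = np.cov(items, rowvar=False)            # sample covariance matrix
    v_bar = np.diag(cov).mean()                  # average item variance
    c_bar = cov[~np.eye(n, dtype=bool)].mean()   # average inter-item covariance
    return n * c_bar / (v_bar + (n - 1) * c_bar)
```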

Let's consider this sample data, where X1, X2, X3, and X4 are questions that take on integer values from 1 to 5 (a Likert scale). Together, these questions form a scale. There are 10 rows, so 10 people filled out this survey.

| id | X1 | X2 | X3 | X4 |
|----|----|----|----|----|
| 1  | 3  | 3  | 3  | 3  |
| 2  | 3  | 3  | 4  | 3  |
| 3  | 3  | 3  | 3  | 1  |
| 4  | 5  | 4  | 4  | 5  |
| 5  | 2  | 1  | 4  | 4  |
| 6  | 4  | 4  | 4  | 5  |
| 7  | 3  | 2  | 3  | 3  |
| 8  | 3  | 4  | 3  | 3  |
| 9  | 4  | 2  | 2  | 3  |
| 10 | 3  | 3  | 2  | 3  |

Let’s find c¯. The covariance matrix is:

|    | X1     | X2     | X3     | X4     |
|----|--------|--------|--------|--------|
| X1 | 0.6778 | 0.4778 | 0.0444 | 0.4556 |
| X2 | 0.4778 | 0.9889 | 0.1333 | 0.2556 |
| X3 | 0.0444 | 0.1333 | 0.6222 | 0.4889 |
| X4 | 0.4556 | 0.2556 | 0.4889 | 1.3444 |

So, $\bar{c} = (0.4778 + 0.0444 + 0.1333 + 0.4556 + 0.2556 + 0.4889)/6 = 0.309$, and $\bar{v} = (0.6778 + 0.9889 + 0.6222 + 1.3444)/4 = 0.908$.

Then,

$$\alpha = \frac{4 \times 0.309}{0.908 + (4 - 1) \times 0.309} = 0.674$$
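The arithmetic above can be reproduced in a few lines of NumPy (the post's own check uses R; this Python sketch and its variable names are mine):

```python
import numpy as np

# Sample responses from the table above: one row per respondent.
data = np.array([
    [3, 3, 3, 3],
    [3, 3, 4, 3],
    [3, 3, 3, 1],
    [5, 4, 4, 5],
    [2, 1, 4, 4],
    [4, 4, 4, 5],
    [3, 2, 3, 3],
    [3, 4, 3, 3],
    [4, 2, 2, 3],
    [3, 3, 2, 3],
])

cov = np.cov(data, rowvar=False)             # sample covariance matrix
v_bar = np.diag(cov).mean()                  # average item variance, ~0.908
c_bar = cov[~np.eye(4, dtype=bool)].mean()   # average inter-item covariance, ~0.309
alpha = 4 * c_bar / (v_bar + 3 * c_bar)
print(round(alpha, 3))                       # 0.674
```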

A quick check in R gives the same result:

```r
library(ltm)

df <- data.frame(X1 = c(3, 3, 3, 5, 2, 4, 3, 3, 4, 3),
                 X2 = c(3, 3, 3, 4, 1, 4, 2, 4, 2, 3),
                 X3 = c(3, 4, 3, 4, 4, 4, 3, 3, 2, 2),
                 X4 = c(3, 3, 1, 5, 4, 5, 3, 3, 3, 3))
cronbach.alpha(df)
```

```
Cronbach's alpha for the 'df' data-set

Items: 4
Sample units: 10
alpha: 0.674
```

How do you use this value? According to George and Mallery (2003), an α >= 0.9 is excellent, >= 0.8 is good, >= 0.7 is acceptable, >= 0.6 is questionable, >= 0.5 is poor, and < 0.5 is unacceptable.
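These cut-offs translate naturally into a small lookup; the helper below is a hypothetical illustration (the function name and structure are mine, only the thresholds come from George and Mallery):

```python
def interpret_alpha(alpha: float) -> str:
    """Rule-of-thumb labels for Cronbach's alpha, from George and Mallery (2003)."""
    for cutoff, label in [(0.9, "excellent"), (0.8, "good"),
                          (0.7, "acceptable"), (0.6, "questionable"),
                          (0.5, "poor")]:
        if alpha >= cutoff:
            return label
    return "unacceptable"

print(interpret_alpha(0.674))  # questionable
```

By this rule of thumb, the α of 0.674 computed above falls in the "questionable" range.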

This post is licensed under CC BY 4.0 by the author.