
Internal reliability / consistency

There’s a sometimes heated debate about whether “consistency” is a more appropriate term than “reliability”. I favour “reliability” and may one day create a glossary entry explaining why. For now, if a peer reviewer objects to your using “reliability”, I favour taking them on, but the issue is academic and not really important!

Details #

Internal reliability is one of the categories of ways to evaluate the reliability of a measure. It is only applicable to multi-item measures, whether those are rating scales or self-report questionnaires. The basic idea is that if the items in the rating scale or questionnaire do reflect something we want to measure as a dimension on which the participants completing the measure differ, then the scores on the items, across a group of participants, should correlate strongly with each other.

That’s it!

Well, the most widely used index of internal reliability is Cronbach’s coefficient alpha. If you like academic distinctions and the minutiae of psychometrics, McDonald’s omega has a claim to be a slightly better measure, though its superiority turns pretty much entirely on how far you think our data fit the strictures of unidimensional versus multi-dimensional variance, and on whether the items show heteroscedasticity (variance varying across items) or not. See the entries for Cronbach’s alpha and McDonald’s omega.
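Cronbach’s alpha itself is simple enough to compute from scratch: it is k/(k−1) times one minus the ratio of the sum of the item variances to the variance of the total scores. A minimal sketch, with my own hypothetical function name and the same items-as-columns layout assumed above:

```python
# Minimal sketch of Cronbach's coefficient alpha; names are hypothetical.
from statistics import pvariance

def cronbach_alpha(items):
    """Cronbach's alpha for `items`: one list of scores per item,
    same respondents in the same order in every list.
    """
    k = len(items)
    # Sum of the individual item (population) variances.
    item_var_sum = sum(pvariance(scores) for scores in items)
    # Total score per respondent, then its variance.
    totals = [sum(person) for person in zip(*items)]
    return (k / (k - 1)) * (1 - item_var_sum / pvariance(totals))
```

When every item carries the same information, alpha reaches 1.0; weakly related items drag it down (and it can even go negative).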

One now largely defunct index, though an easy one to understand, was split-half reliability: the correlation between scores made up from two separate halves of the items in the measure. Hm, I had better create an entry for that, though if you see it used in a report then either you are reading something from the 1960s or 70s or earlier, or something by someone way behind developments in psychometrics. (However, I am much in favour of us not rejecting early reports either simply because they are old, or because they may not have kept up with the sometimes rather “angels on the head of a pin” academic issues in psychometrics!)

Try also #

Chapters #

Chapter 3, “How to judge the quality of an outcome measure” in the OMbook.

Online resources #

I will probably create an Rblog entry expanding on the issues here, and probably a shiny app that will give you various indices of internal reliability from your data, but neither exists yet!

Dates #

First created 3.ix.25.
